

Telecommunications Policy 33 (2009) 706–719


Cybersecurity: Stakeholder incentives, externalities, and policy options

Johannes M. Bauer a,*, Michel J.G. van Eeten b

a Department of Telecommunication, Information Studies, and Media; Quello Center for Telecommunication Management and Law, Michigan State University, East Lansing, Michigan, USA
b Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands

Keywords: Cybersecurity, Cybercrime, Security incentives, Externalities, Information security policy, Regulation

0308-5961/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.telpol.2009.09.001
* Corresponding author. Tel.: +1 517 432 8003; fax: +1 517 355 1292. E-mail addresses: [email protected] (J.M. Bauer), [email protected] (M.J.G. van Eeten).

Abstract

Information security breaches are increasingly motivated by fraudulent and criminal motives. Reducing their considerable costs has become a pressing issue. Although cybersecurity has strong public good characteristics, most information security decisions are made by individual stakeholders. Due to the interconnectedness of cyberspace, these decentralized decisions are afflicted with externalities that can result in sub-optimal security levels. Devising effective solutions to this problem is complicated by the global nature of cyberspace, the interdependence of stakeholders, as well as the diversity and heterogeneity of players. The paper develops a framework for studying the co-evolution of the markets for cybercrime and cybersecurity. It examines the incentives of stakeholders to provide for security and their implications for the ICT ecosystem. The findings show that market and non-market relations in the information infrastructure generate many security-enhancing incentives. However, pervasive externalities remain that can only be corrected by voluntary or government-led collective measures.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Malicious software (‘‘malware’’) has become a serious security threat for users of the Internet. Estimates of the total cost to society of information security breaches vary, but data published by private security firms, non-profit organizations, and governments all indicate that their cost is non-negligible and increasing. From a societal point of view, not only the direct costs (e.g., repair costs, losses due to fraud) but also indirect costs (e.g., costs of preventative measures) and implicit costs (e.g., slower productivity increases due to reduced trust in electronic transactions) have to be attributed to information security breaches. Bauer, Van Eeten, Chattopadhyay, and Wu (2008), in a meta-study of a broad range of research, conclude that a conservative estimate of these costs may fall between 0.2% and 0.4% of global GDP. A catastrophic security failure in the world information and communication system, with potentially much higher impact on the global economy, is possible.

Viruses, worms, and the many other variants of malware have developed from a costly nuisance to sophisticated tools for criminals. Computers worldwide, some estimate as many as one in five to one in ten, are infected with malware, often without the knowledge of the machine’s owner. While hacking continues to play a role, other infection strategies have become dominant. They range from the use of email attachments to spread malware, downloads from legitimate websites which are themselves infected, and distribution via social networks and games. Infected machines are often connected in botnets: flexible remote-controlled networks of computers that operate collectively to provide a platform for fraudulent and criminal purposes. Individuals, organizations, and nation states are targets. Attack vectors directed toward individuals and organizations include spam (most originating from botnets), variations of socially engineered fraud such as phishing and whaling, identity theft, attacks on websites, ‘‘click fraud’’, ‘‘malvertising’’, and corporate espionage. The DDoS attacks on Estonia in 2007 [1] and the spread of the cryptic Conficker worm that, early in 2009, paralyzed parts of the British and French military as well as government and health institutions in other countries, are recent examples of attacks on nations and their civil and military infrastructures [2].

The pervasive presence of security threats has created concern among scholars (e.g., Zittrain, 2008) and policy-makers. The OECD Ministerial Summit in Seoul in 2008 dedicated considerable attention to the issue (OECD, 2009) and the International Telecommunication Union (ITU) has developed an active cybersecurity agenda. Regional initiatives, including efforts by the European Union and APEC, have elevated the visibility of the issue (e.g., Anderson, Böhme, Clayton, & Moore, 2008) and an increasing number of public and private partnerships target information security at a national and supra-national level [3]. These measures are complemented by national initiatives, as illustrated by the comprehensive hearing that the British House of Lords dedicated to the problem (House of Lords, 2007) and the announcement by the US administration of a renewed push to improve cybersecurity (White House, 2009). Given the dynamic and multifaceted nature of the problem, a broad spectrum of simultaneous measures is being considered as the best initial approach. Cybersecurity policy needs to start from a clear and empirically grounded understanding of the nature of the problems before possible solutions can be devised. The analytical lenses of scholars and practitioners on malware have changed significantly in recent years. Security threats were initially viewed as mostly technological problems; in recent years, the perspective has broadened to include economic issues and aspects of user behavior (Anderson & Moore, 2006).

This research has clarified that, as information security comes at a cost, tolerating some level of insecurity is economically rational from an individual and social point of view. Although it is mostly provided by private players, cybersecurity has strong public good characteristics. Therefore, from a societal perspective, a crucial question is whether the costs and benefits taken into account by market players reflect the social costs and benefits. If that is the case, decentralized individual decisions also result in an overall desirable outcome (e.g., a tolerable level of cybercrime, a desirable level of security). However, if some of the costs are borne by other stakeholders or some of the benefits accrue to other players (i.e., they are ‘‘externalized’’), individual security decisions do not properly reflect social benefits and costs. This is the more likely scenario in highly interdependent information and communication systems such as the Internet. Whereas the security decisions of a market player regarding malware will be rational for that player, given the costs and benefits it perceives, the resulting course of action inadvertently or deliberately imposes costs on other market players and on society at large. Decentralized individual decisions will therefore not result in a socially optimal level of security. The presence of externalities can result in internet-based services that are less secure than is socially desirable.

This paper aims at a comprehensive economic analysis of cybersecurity. In particular, given that most security decisions are made by organizations and individuals, it is interested in how well the incentives shaping these decentralized decisions are aligned with achieving a socially optimal level of security. This overarching objective is pursued in three interrelated steps. First, an economic framework is proposed to examine the co-evolution of cybercrime and cybersecurity (Section 2). The overall level of cybersecurity is the outcome of many decentralized decisions by individual stakeholders. Therefore, building on a detailed field study in seven countries, the paper, second, discusses the incentives influencing security decisions of important players such as ISPs, software vendors and users (Sections 3 and 4). This unique and rich data set permits identification and analysis of the security-enhancing and security-reducing incentives perceived by and relevant for key players. The implications of the relevant incentives for the level of security attained by individual players and the sector as a whole are investigated, with special emphasis on the emerging externalities. Whereas many security-enhancing incentives could be detected, significant externalities remain that cannot easily be overcome by private action. The paper therefore examines, third, policy options to enhance cybersecurity (Section 5). Main insights are summarized in the final section.

2. The co-evolution of cybercrime and information security

Information security breaches have become increasingly driven by financial motives. The particular challenges to achieve a desirable level of information security can be better understood by analyzing the interplay of two sets of activities whose players respond to very different incentives: the realm of cybercrime and the realm of information security. These two areas are tightly linked; developments in one directly affect conditions in the other. Decisions and effects of changes in the incentives of players in both realms can be analyzed using a market model. It may seem frivolous to some to look at cybercrime as a ‘‘market’’ [4] but many players in this increasingly differentiated realm respond to price signals and other economic incentives [5]. Some players, whether they are inspired by essentially laudable goals (such as ‘‘white hat’’ hackers) or destructive motives (such as cyberterrorists), do not primarily follow financial gain, but their decisions may be modelled as an optimization over non-financial goals. If the logic and forms of these transactions were better known, it might be possible to interrupt and manipulate price and other signals in ways that quench illegal activity. Little is known about the underground economy organized around cybercrime (Schipka, 2007; Jakobsson & Ramzan, 2008; Kanich et al., 2008). More information is available on the market for information security (e.g., CSI, 2008) but detailed information is often kept proprietary. Attack and defence strategies can be analyzed at the level of individual players (e.g., a cybercriminal, a firm investing in information security, a home user deciding on the purchase of a firewall) or at an aggregate (sector) level. In the latter case the interrelationships between players and the effects of their decisions on the working of the sector as a whole are of primary interest (see Fig. 1).

[1] See N. Anderson, ‘‘Massive DDoS attacks target Estonia; Russia accused,’’ Ars Technica, May 14, 2007. Retrieved September 3, 2009 from http://arstechnica.com/security/news/2007/05/massive-ddos-attacks-target-estonia-russia-accused.ars#.
[2] See M.E. Soper, ‘‘Conficker worm shuts down French and UK Air Forces,’’ Maximum PC, February 10, 2009. Retrieved September 3, 2009 from http://www.maximumpc.com/article/news/conficker_worm_shuts_down_french_and_uk_air_forces.
[3] The first Computer Emergency Response Team (CERT) was established in 1988 in response to the first internet worm. By 2009, the Forum of Incident Response and Security Teams (FIRST), founded in 1990, and one of the many collaborative partnerships, had more than 200 members from all continents (see http://www.first.org).

Fig. 1. Important interdependencies in the ICT ecosystem.

Information and communication services require inputs from many players. They include internet service providers (ISPs), application and service providers (App/Svc), hardware and software vendors, users, security providers, and national and international organizations involved in the governance of these activities. Two main types of interdependencies exist among these participants: they are, first, physically connected via the nodes and links of the networks and, second, connected in commercial and non-commercial transactions. This highly interconnected system has been described as a ‘‘value net’’ (e.g., Bovet & Martha, 2000; Brandenburger & Nalebuff, 1996; Li & Whalley, 2002) or as an ‘‘ecosystem’’ (Fransman, 2007; Peppard & Rylander, 2006). Whereas the first term emphasizes the joint nature of value creation, the second one emphasizes interdependence among the participants and their co-evolution. Players (and institutions) co-evolve if one player’s actions affect but do not fully determine the choice options and strategies of others (e.g., Nelson, 1994; Murmann, 2003; Gilsing, 2005). As interdependence and co-evolution are important aspects of the cybersecurity problem, the second notion will be used throughout the paper. Each type of player in this ICT ecosystem is composed of differentiated and heterogeneous players, which might be grouped into relatively similar classes (indicated by the various indices). For example, users could be differentiated into large industrial, small and medium-sized enterprises, and residential. The ICT ecosystem extends across national boundaries and jurisdictions. It is under relentless attack from cybercriminals (and some benign hackers). These criminals often are located in countries with a weak rule of law but they typically launch attacks from nations with a highly developed information infrastructure (such as the US and South Korea) or a large number of users whose machines are poorly protected (such as Brazil, Turkey, and China) (Symantec, 2009, p. 8).

[4] For the promises and challenges of studying criminal activity in a market framework see Becker (1968) and Ehrlich (1996). For an application to cybersecurity see Kshetri (2006) and Franklin, Paxson, Perrig, and Savage (2007).
[5] Economic incentives are the factors that influence decisions by individuals and individuals in organizations. They are rooted in economic, formal-legal, and informal mechanisms, including the specific economic conditions of the market, the interdependence with other players, laws as well as tacit social norms. Players respond to the incentives that are perceived as relevant for their decisions. Anderson and Moore (2006) have argued that ‘‘over the past 6 years, people have realized that security failure is caused at least as often by bad incentives as by bad design.’’ In their view, many of the problems of information security can be explained more clearly and convincingly using the language of microeconomics: network effects, externalities, asymmetric information, moral hazard, adverse selection, liability dumping, and the tragedy of the commons. Within this literature, the designing of incentives that stimulate efficient behavior is central.

Fig. 2. Markets for cybercrime and cybersecurity.

With the growth of the underground cybercrime economy, different tasks in this network of transactions apparently are increasingly carried out by specialists. One individual or organization is rarely involved in the whole range of activities necessary. The division of labor has spawned writers of malicious code, distributors of code, herders (‘‘owners’’) of botnets, spammers, identity collectors, drop services, drop site developers, and drops (Schipka, 2007). Specialization has not only increased the sophistication and virulence of malware but also increased the productivity of the cybercrime economy and hence reduced the cost of attack tools. Despite the heterogeneity of these actors, it is reasonable to assume that they act in a purposefully rational manner. That is, given their information about relevant opportunities and constraints, they will pursue and expand their activities as long as incremental benefits exceed incremental costs. Too little information is typically available to determine the exact shape of cost and benefit relations of the players empirically. However, it is possible to derive some insights from a conceptual analysis. Other things equal (‘‘ceteris paribus’’), it is likely that each actor can only expand his or her activity level at higher cost. For example, after highly vulnerable information systems are attacked, it will be exceedingly difficult to penetrate more secure systems. Likewise, it will generally be more time-consuming and hence costly to write code to attack software and devices that have been fortified against attacks. Hence, the short-run incremental cost curve faced by criminals will most likely be upward sloping.

The shape of the incremental benefit relationship is less obvious. If, for any given type of attack, the bigger rewards are easier to reap, the incremental benefit curve will be downward sloping. In principle, if more effort were ‘‘rewarded’’ with higher spoils, it could also be upward sloping. Whereas this is possible in some cases, it will probably not hold at an aggregate level, at least in societies that adhere to the rule of law. For the purposes of our further discussion we will therefore focus on the more plausible scenario in which the slope of the incremental cost curve is higher than that of the incremental benefits curve. Individual decisions can be aggregated to represent the activities in a specific ‘‘market segment’’, for example, the market for malicious code, botnets, or stolen credit cards. There is no reason to believe that these market segments do not exhibit upward sloping supply and downward sloping demand schedules. Cybercriminal activity could be aggregated even further to generate a representation of the overall market for cybercrime. To this end, it is necessary to define the traded services in a more abstract way. In this aggregate market, supply is shaped by the cost of breaching information systems. Demand is shaped by the benefits of such breaches and the corresponding willingness to pay for them. The intersection of these two schedules will determine the overall level of fraudulent and criminal security breaches (see Fig. 2). As long as the incremental benefits in this market exceed the incremental costs, cybercriminal activity will expand (and vice versa).
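A minimal formalization may make this equilibrium logic explicit. The linear functional forms below are illustrative assumptions introduced only for exposition; they are not estimated in the study:

\[
MC(a) = c_0 + c_1 a, \qquad MB(a) = b_0 - b_1 a, \qquad c_1, b_1 > 0,
\]
\[
a^{*} = \frac{b_0 - c_0}{b_1 + c_1}, \qquad MB(a^{*}) = MC(a^{*}),
\]

where a denotes the aggregate level of cybercriminal activity, MC(a) the incremental cost of breaching information systems, and MB(a) the incremental benefit. Activity expands as long as MB(a) > MC(a). A fall in c_0, for example through cheaper attack tools or less diligent law enforcement, raises a*; measures that raise c_0 or lower b_0 reduce it, which is the comparative-statics reading of the curve shifts discussed next.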

The shape and position of the incremental cost and benefit schedules is influenced by changes in the technological basis of information systems, the technology of attacks, and the institutional and legal framework governing information security and its violations. For example, a deeper division of labor in the underground economy, improvements in attack technologies, or less diligent law enforcement will reduce the cost of cybercrime, and therefore shift the supply curve to the lower right (as illustrated in Fig. 2). Accordingly, for a given demand curve for cybercrime (i.e., ceteris paribus) cybercriminal activity will increase. In contrast, higher penalties for cybercrime, stricter law enforcement that elevates the risk of being caught, and measures that improve information security all increase the cost of criminal activity and hence shift the supply curve to the upper left. For a given demand curve for cybercrime (i.e., ceteris paribus) this implies that cybercriminal activity will diminish. The demand schedule for cybercrime is also influenced by forces external to the market for cybercrime. It shifts to the upper right as the net benefits of criminal activity increase, for example, if the number of users of electronic transactions (and hence the potential rewards from cybercrime) grows, connectivity is increasing globally, mobile devices are used more, and ICT use in the private and public sectors becomes more widespread. Ceteris paribus this will lead to increased cybercriminal activity. On the other hand, if the net rewards of cybercrime shrink, for example, because of a lower user base, tighter security, and more stringent law enforcement, the schedule is shifted to the lower left, reducing cybercrime. Presently, the factors contributing to an intensification of cybercrime seem to outweigh the others, putting upward pressure on the overall level of cybercrime.

Like the realm of cybercrime, information security can be analyzed in a market framework. It is best seen as an aggregate made up of individual sub-markets in which vendors supply and heterogeneous types of users demand information security products and services. Such services are offered to different users by a range of specialized vendors, including security service providers (anti-malware software providers, suppliers of network monitoring hardware and software such as Cloudmark), Internet Service Providers, or they may be provided in-house by a specialized department. These players are supported by non-profit groups like Spamhaus, the Messaging Anti-Abuse Working Group (MAAWG), and many blacklisting services that monitor spam and other malicious activity. Security services are ultimately demanded by different types of users including residential users, for-profit and not-for-profit organizations of different sizes, and government agencies. Within its specific context, incentives, and available information, each decentralized player will again make purposefully rational decisions. Once all adjustments are made, it is reasonable to assume that each decision-maker will strive for a situation in which the perceived incremental benefits of additional security measures are approximately equal to the incremental costs of such measures. Given the state of security technology, the incremental cost of improved security will most likely increase. At the same time, with very few if any exceptions, the benefits of additional security measures will decrease. Consequently, the optimal level of security is found where the incremental cost and benefits of security are equated. In a dynamic perspective, under conditions of risk and uncertainty, although the analysis is somewhat more cumbersome, the same principal decision rule applies: that the optimal level of security is found where the adjusted and discounted incremental benefits equal the incremental costs (Gordon & Loeb, 2004).
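One standard way to make this decision rule concrete is a single-period investment model in the spirit of Gordon and Loeb; the notation below is a sketch of that logic, not a reproduction of their specific functional forms:

\[
\max_{z \ge 0} \; \big[v - S(z, v)\big]\,\lambda - z
\qquad \Longrightarrow \qquad
-\frac{\partial S(z^{*}, v)}{\partial z}\,\lambda = 1,
\]

where z is the security investment, v the probability of a breach without additional investment, S(z, v) the breach probability after investing z, and \lambda the expected loss from a breach. Investment is increased only up to the point where the last dollar spent reduces expected losses by exactly one dollar; in a multi-period setting the same condition applies to discounted expected losses.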

Many of the challenges of reaching an optimal level of information security at the aggregate level are rooted in a potential mismatch between the perceived individual and social benefits and costs of security. In information systems, positive and negative externalities are closely intertwined aspects of security decisions. Additional security investments of end users or of an intermediate service provider such as an ISP also benefit others in the ICT system and are thus associated with positive externalities. Similarly, insufficient security investment by a player exerts a negative externality on others in the system. The externality problem is complicated by the economic features of information and security markets. Many information products are produced under conditions of high fixed but low incremental costs. Unfettered competition will drive prices quickly to incremental costs. Moreover, many products and services are characterized by positive network effects. Firms will respond to such conditions with product and price differentiation, attempts to capitalize on first mover advantages, versioning, and other strategies to overcome these challenges (Illing & Peitz, 2006; Shapiro & Varian, 1999; Varian, Farrell & Shapiro, 2005). Given the current legal framework of information industries, in particular the current liability regime, security is a quasi-public good. Suppliers will typically not be able to mark-up their products in accordance with the social value of their security features but only with the private value, leading to an under-provision of security. If suppliers face a trade-off between speed to market and security, given positive network effects, speed to market may take precedence over security testing (Anderson, 2001; Anderson & Moore, 2006).

Cybercriminal activity and decisions in information security markets co-evolve, mutually influencing but not fully determining each other’s course. An increased level of cybercrime (determined in the market for cybercrime) will increase the effort and cost of maintaining a given level of information security. At the same time, the benefits of increased security may increase. Consequently, the overall cost of security will increase for all participants in the ICT ecosystem even if the level of security remains unchanged. In contrast, if effective measures reduce the level of cybercrime, the costs of security will decline as well. However, the overall level of security may not increase as users may decide to maintain a given level with lower expenditures. There is an asymmetry in the relation between the markets for cybercrime and cybersecurity. Whereas higher or lower levels of cybercrime will increase or decrease the overall cost of security, the effect of such changes on the level of security remains ambiguous. In contrast, increased security will unequivocally increase the cost of cybercrime and/or reduce the benefits of cybercrime. As these two effects work in the same direction, increased security will reduce the level of cybercrime (Van Eeten & Bauer, 2008). However, it may trigger a new round in the technological race between cybercriminals and defenders.

3. Security incentives of stakeholders

The overall level of cybersecurity emerges from the decentralized decisions of the players in the ICT ecosystem. It is therefore important to understand how these decisions are made and what factors (‘‘incentives’’) influence them. Very little detailed empirical evidence is available on the security incentives of stakeholders and the externalities that emanate from them. This section reports findings of recent qualitative field research designed to explore these incentives. In the course of 2007, a team of researchers from the Delft University of Technology and Michigan State University conducted 41 in-depth structured interviews with 57 professionals of organizations operating in networked computer environments that are confronted with malware. Interviewees represented a stratified sample of professionals from different industry segments (e.g., hardware, software, service providers, and users) in seven countries (Australia, Canada, Germany, the Netherlands, United Kingdom, France, and the United States). Moreover, experts involved in the governance of information security issues such as Computer Emergency Response Teams (CERTs) and regulatory agencies were interviewed. Here we focus on the incentives of ISPs, software vendors, E-commerce providers (most notably banks and other financial service providers), and users [6]. The aim is to identify if and how these incentives give rise to negative security externalities, which require some form of voluntary or government-led collective action.

Table 1. Security incentives of players in the ICT value chain

Internet service providers (ISPs)
  Security-enhancing: Cost of customer support; cost of abuse management; cost of blacklisting; loss of reputation, brand damage; cost of infrastructure expansion; legal provisions requiring security
  Security-reducing: Cost of security measures; cost of customer acquisition; legal provisions that shield ISPs

Software vendors
  Security-enhancing: Cost of vulnerability patching; loss of reputation, brand damage
  Security-reducing: Cost of software development and testing (time to market); benefits of functionality; benefits of compatibility; benefits of user discretion; licensing agreements with hold-harmless clauses

E-commerce providers (banks)
  Security-enhancing: Benefits of online transaction growth; trust in online transactions; loss of reputation, brand damage
  Security-reducing: Cost of security measures; benefits of usability of the service

Users
  Security-enhancing: Awareness of security risks, realistic self-efficacy, exposure to cybercrime
  Security-reducing: Poor understanding of risks, overconfidence, cost of security products and services

The interdependence between stakeholders has, to a certain degree, contributed to a blame game between individual players accusing each other of insufficient security efforts. Whereas Microsoft, due to its pervasive presence, is a frequent target (e.g., Perrow, 2007; Schneier, 2004), the phenomenon is broader: security-conscious ISPs blame rogue ISPs, ISPs blame software vendors, countries with higher security standards blame those lacking law enforcement, and so forth. Although these claims are sometimes correct, they do not represent an accurate picture of the complicated relations. The field research into the incentives and the way externalities percolate through the system reveals a more nuanced picture. Several admittedly imperfect mechanisms, such as reputation effects and reciprocal sanctions among players, exist that help align private and social costs and benefits. Moreover, the high interconnectedness may result in the internalization of externalities at other stages of the value chain; the associated costs are therefore not externalized to society at large but to other players and may indirectly be charged back to the originators. For every player interviewed in the study, multiple and sometimes conflicting incentives were at work, leaving the net effect contingent upon the relative influence of the specific drivers at work. Table 1 summarizes important incentives that shape the security decisions of participants in the ICT ecosystem.

Internet Service Providers (ISPs) are key market players and often criticized for poor security investment levels. Nonetheless, ISPs operate under several security-enhancing incentives (although their effectiveness is mediated by the ISP’s business model). Commercial ISPs will take security into account when it affects their revenue stream and bottom line. This is vividly illustrated by the example of viruses and spam. Early on, ISPs argued that emails were the personal property of recipients and that an inspection of the content of mails was a violation of privacy. Consequently, the responsibility for protecting their own machines and for dealing with spam was attributed to end users. With the exorbitant growth of spam, currently constituting upwards of 80% of all emails, i.e., in the order of 200 billion messages per day, the financial implications for ISPs also changed fundamentally. Not only did the flood of spam become a burden for network infrastructure that would have required additional investment, the malware imported onto the network did indirectly affect the ISP’s cost. Users of infected machines started to call the help desk or customer service at a fairly high cost per call to the ISP. Malicious traffic sent from infected machines triggers abuse notifications from other ISPs and requests to fix the problem, typically requiring even more expensive outgoing calls to customers. In extreme cases, the whole ISP could be blacklisted (see below), causing potentially serious customer relations and reputation problems. Facing this altered economic reality, ISPs reversed their stance with little fanfare and started to filter incoming mail and to manage their customers’ security more proactively.

In the present environment, ISPs operate under several more or less potent incentives. The strength of these incentives is mediated by the business model adopted by an ISP. ‘‘Rogue’’ ISPs whose business model is based on the hosting of shady activities will respond differently than commercial ISPs seeking legitimate business. Costs of customer support and abuse management as well as the cost of additional infrastructure that might be required to handle malicious traffic all have an immediate effect on the bottom line and will increase the incentives to undertake security-enhancing measures. Loss of reputation and brand damage work indirectly (and probably slower) but exert pressure in the same direction. ISPs are embedded in an interdependent system of service providers. If contacts via the abuse management desk are ineffective, other ISPs have a range of escalating options to retaliate for poor security practices with regard to outgoing malicious traffic, even if the origin is an individual user.

[6] See Van Eeten and Bauer (2008) for the full project report and OECD (2009), in particular Part II. A full list of the organizations that participated in the field study is provided in Van Eeten and Bauer (2008, pp. 67–68).

Blacklists (or ‘‘blocklists’’), inventories of IP addresses and email addresses reported to have sent spam and other forms of malicious code, are regularly used by ISPs to filter and block incoming traffic. Lists are typically maintained by non-profit organizations such as SpamCop, Spamhaus, or the Spam and Open Relay Blocking System (SORBS) [7]. Other lists, such as those maintained by RFC-Ignorant, detail IP networks that have not implemented certain RFCs (‘‘requests for comments’’, memoranda published by the Internet Engineering Task Force (IETF)). This includes, for example, ISPs that do not have a working ‘‘abuse@domain’’ address [8]. These lists are not uncontested and their promoters are sometimes seen as ‘‘vigilantes.’’ But they are widely used and provide real-time countermeasures for network administrators. Each blacklist organization has procedures for delisting an ISP once the originating IP address ceases the malicious activity. Substantial presence on one or more of these lists, reflected in the listing of many IP addresses belonging to an ISP for an extended time period, will drive up customer support and abuse management costs because, for example, emails originating from blacklisted ISPs may not be delivered, resulting in customer dissatisfaction and complaints. It may also trigger subsequent reputation and revenue losses for an ISP. Both effects create an incentive to improve security measures, or at least to respond in a timely fashion to abuse requests. In extreme cases, an entire ISP (and not just IP addresses or address ranges) may be blocked by blacklists and de-peered by other ISPs, raising the costs of this ISP significantly, possibly to the point where its business model becomes unsustainable.
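To illustrate the mechanics, the sketch below shows how a mail server or abuse desk might check whether an IP address appears on a DNS-based blacklist such as Spamhaus ZEN (see note [7]). The reversed-octet query convention is standard DNSBL practice; the specific zone name and the example address are illustrative assumptions, not details taken from the field study.

import socket

def dnsbl_listed(ip, zone="zen.spamhaus.org"):
    """Return True if the IPv4 address is listed on the given DNS blacklist.

    DNSBLs are queried by reversing the octets of the address and appending
    the list's zone, e.g. 192.0.2.1 -> 1.2.0.192.zen.spamhaus.org. A successful
    A-record lookup means "listed"; a resolution failure means "not listed".
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

# Example: 192.0.2.1 is a documentation address and should normally not be listed.
if __name__ == "__main__":
    print(dnsbl_listed("192.0.2.1"))

An ISP whose customer addresses generate many such listings faces exactly the support, delivery, and reputation costs described above, which is what gives the blacklist mechanism its incentive effect.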

All these incentives work in favor of enhanced security measures. On the other hand, the costs of increasing security, legal provisions that shield ISPs from legal liability, and the costs of customer acquisition all work in the opposite direction. Other things equal, they constitute incentives to adopt a lower level of information security. The net effect on ISPs is hence dependent on the relative strength of the components of this web of incentives. The high cost of customer calls (estimated by some ISPs to be as high as $12 per incoming and $18 per outgoing call), while providing an incentive to find alternative solutions to enhance security of end users, also may provide an incentive to ignore individual cases that have not triggered notifications to blacklisting services or abuse requests from other ISPs. Most ISPs estimated that only a small percentage of the infected machines on their network show up in abuse notifications. Considerable advances and lower cost of security technology, on the other hand, have enabled ISPs to move their intervention points closer to the edges of their network and thus automate many functions in a cost-effective way. In an escalating response, machines on the network may initially be quarantined with instructions to the user to undertake certain measures to fix a problem. Only in cases that cannot be solved in this fashion may customer calls become necessary. Overall, while the interdependencies among ISPs result in the internalization of some of the external effects, this internalization is partial at best. Hence, it is likely that the highly decentralized system of decision-making yields effort levels that, while higher than often assumed, nevertheless fall short of the social optimum.
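A back-of-envelope calculation shows why purely notification-driven abuse handling can be privately rational even though it leaves most infections untreated. Only the per-call figures below come from the interview data; the number of infected customers, the share that triggers notifications, and the cost of an automated quarantine are hypothetical values chosen for illustration.

# Illustrative calculation; only the per-call costs are taken from the interview data.
COST_INCOMING_CALL = 12.0    # help-desk call from an affected customer (from interviews)
COST_OUTGOING_CALL = 18.0    # outgoing call after an abuse notification (from interviews)
COST_AUTO_QUARANTINE = 2.0   # assumed cost of an automated quarantine/notification

infected_customers = 10_000  # assumed number of infected machines on the network
notified_share = 0.05        # assumed share that shows up in abuse notifications

# Reactive strategy: act only on infections that generate calls and notifications.
reactive_cost = infected_customers * notified_share * (COST_INCOMING_CALL + COST_OUTGOING_CALL)

# Proactive strategy: automatically quarantine every infected machine.
proactive_cost = infected_customers * COST_AUTO_QUARANTINE

print(f"reactive, notification-driven handling: ${reactive_cost:,.0f}")
print(f"proactive quarantine of all infections: ${proactive_cost:,.0f}")
# With these assumptions the reactive strategy is cheaper for the ISP even though
# most infected machines, and the externalities they generate, remain untreated.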

Software is a critical component in the ICT value chain. Exploitation of vulnerabilities is one important attack vector. The market for exploits has become increasingly sophisticated as seen, for example, in the steady increase in zero-day exploits. Like ISPs, software vendors work in a complex set of potentially conflicting incentives whose overall net effect is difficult to determine. One strong factor working to enhance security efforts is the cost of vulnerability patching, which comprises the cost of patch development, testing, and deployment. As software is typically installed in many versions, configurations, and contexts, the cost of patch development can be very significant. From the perspective of software vendors it may therefore be advantageous to invest in security upfront rather than have the follow-up costs of patching. However, given the complexity of modern software packages and the benefits of short time-to-market, it will not be economical to invest to the point where all potential flaws are eliminated. Loss of reputation and brand damage also strengthen the incentive to invest in security. However, reputation effects may work only slowly and are further weakened if the vendor has a strong market position, either because of a large installed base or because of limitations in the number of available substitute products. Nonetheless, as the security enhancements in the transition from Windows XP to Windows XP Service Pack 2 to Windows Vista illustrate, reputation effects do work.

At the same time, there are several incentive mechanisms that, other things equal, weaken the effort to provide secure software. Software testing lengthens time-to-market and increases the cost of software development. In industries with network effects and first-mover advantages, such delays may result in significantly reduced revenues over the product life cycle (Anderson & Moore, 2006). Thus, the more competitive the software market segment, the lower the incentive to improve security beyond a minimum threshold. Moreover, in software design complicated trade-offs have to be made between different design aspects. Security often is in conflict with functionality, compatibility, and user discretion over the software’s configuration (a benefit that many users covet). Lastly, hold-harmless provisions in software licenses and shrink-wrap license agreements largely relieve software vendors from liability for financial damages stemming from software flaws. The diffusion of software for mobile devices and the expansion of open source software also increase vulnerabilities. The fear that increased liability of software vendors might reduce innovation rates is not unfounded but the strength of this effect is unknown.

[7] See http://www.spamcop.net, http://www.spamhaus.org/zen, and http://www.de.sorbs.net/.
[8] See http://www.rfc-ignorant.org/.

E-commerce providers form another important class of market players. One particularly interesting sector is financial service providers, not least because they are a high priority target for attackers. This is a rather diverse sector, encompassing different types of banks, credit card companies, mutual funds, insurance companies, and so forth. The rules for each of these players differ in detail. Focusing predominantly on merchant banks, Van Eeten and Bauer (2008) concluded that these financial service providers are to a considerable degree able to manage risks emanating from their customer relations. However, they need to make choices balancing enhanced security and the growth of their electronic business. In principle, they could use highly secure platforms to conduct e-commerce transactions. Such an approach would likely have detrimental effects on users as it decreases the convenience of conducting business. Financial organizations thus face a trade-off between higher security and migrating transactions to cost-saving electronic platforms. Many financial service providers offer compensation for losses incurred by their customers from phishing or other fraudulent actions as part of this overall security decision. This practice aligns the incentives of the financial service provider with the goal of improved security (but not the incentives of individual users). Businesses other than financial service providers may often not be in a position to manage externalities associated with their clients. Therefore, more significant deviations between private incentives and social effects may exist, resulting in a sub-optimally low level of security investment by these firms.

Large businesses (firms with 250 or more employees) are a heterogeneous group. Many large business users have adopted risk assessment tools to make security decisions (Gordon & Loeb, 2004). Their diligence will vary with size and possibly other factors such as the specific products and services provided. Two other groups of players that deserve mentioning are small and medium enterprises (SMEs, typically defined as enterprises with fewer than 250 employees, including microenterprises) and residential users. Although this is a large and diverse group, these players also are in several respects similar. Like other participants, they work under multiple and potentially conflicting incentives. Unlike larger businesses that may be able to employ information security specialists, either in-house or via outsourced services, many SMEs and residential users have insufficient resources to prevent or respond to sophisticated types of attacks. Whereas awareness of security threats has increased, there is mounting information that many residential users underestimate their exposure and overestimate their efficacy in dealing with risks (LaRose, Rifon, & Enbody, 2008). Although these constitute similarities between SMEs and residential users, there are also differences. In general, one can assume that businesses employ a more deliberate, instrumentally rational form of reasoning when making security decisions. In both cases, however, the benefits of security expenses will to a large degree flow to other users.

Individual businesses and users may suffer from the perception that their own risk exposure is low, especially if others protect their machines, the well-known free rider phenomenon. On the other hand, given increased information, a growing number of users in this category is aware of the risks of information security breaches. Thus, they realize to a certain extent that they are the recipients of ‘‘incoming’’ externalities. Overall, one can expect that on average these classes of users will not be full free riders. Whereas some individuals and SMEs may over-invest, there is evidence that most will not invest in security at the level required by the social costs of information security breaches (Kunreuther & Heal, 2003). This conclusion is corroborated by the observation that many individual users do not purchase security services, do not even use them when offered for free by an ISP or a software vendor [9], and turn off their firewalls and virus scanners regularly if they slow down certain uses, such as gaming.

[9] For example, XS4All, a Dutch ISP, offered security services to their customers but less than 10% signed up. Eventually, the ISP decided to offer security software as part of the subscription, but even then many users would not install it. Many users also do not use automatic updates to their software.

The ICT ecosystem also has other security loopholes. Some have to do with the engineering conventions of the Internet. TCP/IP was designed as a very open and transparent protocol but not with security in mind. This openness has facilitated a broad variety of applications and services but is also at the heart of security problems. Some of these problems could be remedied with overlay network infrastructure and some will be mitigated by moving to IPv6. Moreover, the long-time convention of registrars to grant five-day grace periods to allow the ‘‘tasting’’ of domain names has been abused; changes to this practice have been approved by the Internet Corporation for Assigned Names and Numbers (ICANN) but have not yet been implemented.

4. Analysis of the emerging patterns

What patterns can be observed in the empirical findings on the incentives of market players and the externalities that emerge from them? Pointing to a variety of reports that show increases in malicious attack trends, one might conclude that markets are not responding adequately. Our field work and analysis revealed a more diverse, graded picture, indicating a number of market-based incentive mechanisms that enhance security but also other instances in which decentralized actions are afflicted with externalities and possibly sub-optimal outcomes. The findings suggest that all players face at least some incentives that work in the correct direction. Furthermore, all market players studied experienced at least some consequences of security trade-offs on others. In other words, feedback loops (e.g., reputation effects or threats to the revenues of a company from poor security practices) were at work that brought some of the costs imposed on others back to the agent that caused them. However, in many cases these feedbacks were too weak, too localized, or too slow to move agents’ behavior swiftly towards more efficient social outcomes. In all cases, moreover, incentive mechanisms exist that undermine security. Overall, across the ecosystem of all the different market players, three typical situations emerge:

(1) No externalities: This concerns instances in which a market player, be it an individual user or an organization, correctly assesses security risks, bears all the costs of protecting against security threats (including those associated with these risks) and adopts appropriate countermeasures. Private and social costs and benefits of security decisions are aligned. There may still be significant damage caused by malware, but this damage is borne by the market player itself. This situation would be economically efficient but, due to the high degree of interdependency in the Internet, it is relatively rare. Essentially, this scenario only applies to closed networks and user groups.

(2) Externalities that are borne by agents in the value net that can manage them: This concerns instances in which a market player assesses the security risks based on the available information but, due to the existence of (positive or negative) externalities, the resulting decision deviates from the social optimum. Such deviations may be based on a lack of incentives to take costs imposed on others into account, but they can also result from a lack of information, insufficient skills to cope with security risks, or financial constraints faced by an individual or organization. If somebody in the value net internalizes these costs and this agent can influence the magnitude of these costs – i.e., it can influence the incentives and security trade-offs of the agents generating the externality – then the security level achieved in the ICT ecosystem will be closer to the social optimum than without such internalization. This scenario depicts a relatively frequent case, typically situations in which a commercial relation exists between players. Thus, numerous examples were found where externalities generated in one segment of the value net were internalized by other market players. ISPs and financial service providers that internalize costs originating from end users are two examples that were discussed in more detail above.

(3) Externalities that are borne by agents who cannot manage them or by society at large: An individual unit may correctly assess the security risks given its perceived incentives but, due to the existence of externalities, this decision deviates from the social optimum. Alternatively, an individual unit may not fully understand the externalities it generates for other actors. Unlike in scenario two, agents in the ICT ecosystem that absorb the cost are not in a position to influence them – i.e., influence the security trade-offs of the agents generating the externality. Hence, costs are generated for the whole sector and society at large. These are the costs of illegal activity or crime associated with malware, the costs of restitution of crime victims, the costs of e-commerce companies buying security services to fight off botnet attacks, the cost of law enforcement associated with these activities, and so forth. Furthermore, they may take on the more indirect form of slower growth of e-commerce and other activities. Slower growth of ICT use may entail a significant opportunity cost for society at large if the delayed activities would have contributed to economic efficiency gains and accelerated growth.

Among the most poignant cases in this category are the externalities caused by lax security practices of end users. Some of these externalities are internalized by other market players that can mitigate them, most notably ISPs that can quarantine infected end users. ISPs have incentives to deal with these problems but only in so far as they themselves suffer consequences from the end user behavior, e.g., by facing the threat that a significant part of their network gets blacklisted. Estimates mentioned in the interviews suggested that the number of abuse notifications received by ISPs represents only a fraction of the overall number of infected machines in their network. This observation suggests that a considerable share of the externalities originating from users may not be mitigated.

Consequently, a large share of these costs of poor security practices of end users is borne by the sector as a whole and society at large, typically in the form of higher direct, indirect and implicit costs (see Van Eeten, Bauer, & Tabatabaie, 2009). These externalities are typically explained by the absence of incentives for end users to secure their machines. It would be more precise, however, to argue that end users do not perceive any incentives to secure their machines, in part due to insufficient information. While malware writers have purposefully chosen to minimize their impact on the infected host and to direct their attacks at other targets, there is also a plethora of malware which does in fact attack the infected host—most notably to scour personal information that can be used for financial gain. In that sense, end users should have a strong incentive to secure their machines. Unsecured machines cannot differentiate between malware that does or does not affect the owner of the machine. If the machine is not sufficiently secured, then one has to assume that all forms of malware can be present. The fact that this is not perceived by the end user is an issue of incomplete information rather than a principal failure of the respective incentive mechanism.

The discussion so far has assumed that the threat level does not fully undermine the operations of the ICT ecosystem. However, there is a risk of not just security failures or even a disaster but of catastrophic failure. Given the dependence of all aspects of global society on the Internet (and electronic communications in general), widespread and extended failure could certainly have catastrophic consequences. That such a pervasive failure or technological terrorism has not yet happened and has a low probability complicates the formulation of a response (Posner, 2004). Like other events with a low but non-trivial probability, it could be considered a ‘‘black swan’’ event (Taleb, 2007). Cost-benefit analysis of such catastrophic events would help in shaping more rational responses but it is extremely difficult. Complications include the choice of an appropriate time horizon, the quantification of the risk in question, problems of monetizing a wide range of qualitative impacts, and the determination of social discount rates applied to monetized future events. Nonetheless, assessing the potential costs of internet security breaches would greatly benefit from such an exercise.
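In its simplest form, such a cost-benefit exercise compares the expected present value of catastrophic losses with the cost of mitigation. The expression below is a generic formulation added here for illustration, not one proposed in the paper:

\[
\mathrm{PV} \;=\; \sum_{t=1}^{T} \frac{p_t \, L_t}{(1 + r)^{t}},
\]

where p_t is the (small) probability of a catastrophic failure in year t, L_t the monetized loss if it occurs, r the social discount rate, and T the time horizon. Each complication listed above maps onto one ingredient of this expression: the horizon T, the probabilities p_t, the monetization of L_t, and the choice of r.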

ARTICLE IN PRESS

Table 2. Principal policy instruments to enhance information security, by predominant policy vector (cybercrime vs. information security).

Legal and regulatory measures
  Cybercrime: national legislation; bi- and multi-lateral treaties; forms and severity of punishment; law enforcement.
  Information security: national legislation/regulation of information security; legislation/regulation of best practices to enhance information security; liability in case of failure to meet required standards; tax credits and subsidies.

Economic measures
  Cybercrime: measures that increase the direct costs of committing fraud and crime; measures that increase the opportunity costs of committing fraud and crime; measures that reduce the benefits of crime.
  Information security: level of financial penalties for violations of legal/regulatory provisions (compensatory, punitive); payments for access to valuable information; markets for vulnerabilities; insurance markets.

Technical measures
  Cybercrime: redesign of physical and logical internet infrastructure.
  Information security: information security standards; mandated security testing; peer-based information security.

Informational and behavioral measures
  Cybercrime: national and international information sharing on cybercrime.
  Information security: national and international information sharing on information security; educational measures.


5. Policy implications

This section explores the role of policy and regulation in overcoming cybersecurity problems. The analysis in Sections 3 and 4 suggests that decentralized decision-making often generates correct incentives for stakeholders and that deviations from a desired security level often trigger the appropriate feedbacks. However, many instances were also identified where the resulting incentives were weak or too slow, or where the net effects of the security-enhancing and security-reducing incentive mechanisms could be either positive or negative. Moreover, the ICT ecosystem is under relentless attack from organized gangs and reckless individuals who have at their disposal increasingly powerful and sophisticated hacking tools and malicious code. Many of these activities are organized in countries where the costs of engaging in cybercrime are low: law enforcement is weak or non-existent; due to dire overall economic conditions, the opportunity costs of participating in cybercrime rather than pursuing other gainful employment are low; and technological means enable criminals to operate swiftly across many national boundaries. At the same time, numerous incremental measures to improve security beyond the status quo are known but may not be undertaken by decentralized players because they are afflicted with positive externalities. The benefits of such measures potentially help all stakeholders, but their costs often need to be borne by one particular group.

Participants in the ICT ecosystem suffer from a prisoner's dilemma problem: everybody is worse off if decisions are made in a non-cooperative fashion. Where interactions are repeated, the dilemma is partially overcome, as is illustrated by the cooperation among ISPs. Enhancing cybersecurity at a broader level will have to overcome this coordination and cooperation issue: it is a collective action problem. Whether such collective measures can be identified and successfully implemented without disadvantages that outweigh the potential improvements needs further examination. The information economics literature has begun to examine these issues during the past few years (Anderson et al., 2008; Anderson & Moore, 2006; Camp, 2006, 2007). Many of the aspects of information markets that aggravate security issues can be explored more systematically from an economic perspective, including network effects, pervasive externalities, the assignment and evasion of liability rules, and moral hazard. Often, the response is to design a broadside of policy instruments, targeting multiple goals with multiple means simultaneously. Whereas this may eventually turn out to be a rational approach, there is also a risk that individual measures counteract and neutralize each other.
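A minimal two-player payoff matrix illustrates the structure of the dilemma. The payoff numbers below are hypothetical and capture only the ordering of outcomes, not actual security investments by any of the players discussed in this paper.

# Stylized security investment game between two interdependent players (e.g., two ISPs).
# Payoffs are hypothetical net benefits; investing is costly but benefits both players.

payoffs = {
    # (player_1_action, player_2_action): (player_1_payoff, player_2_payoff)
    ("invest", "invest"): (3, 3),        # cooperative outcome: highest joint welfare
    ("invest", "free_ride"): (0, 4),     # the free rider enjoys security paid for by the other
    ("free_ride", "invest"): (4, 0),
    ("free_ride", "free_ride"): (1, 1),  # mutual under-investment: everybody worse off
}

def best_response(opponent_action, player_index):
    """Return the action that maximizes a player's payoff given the opponent's action."""
    actions = ["invest", "free_ride"]
    def payoff(a):
        profile = (a, opponent_action) if player_index == 0 else (opponent_action, a)
        return payoffs[profile][player_index]
    return max(actions, key=payoff)

# Free riding is the best response to either choice, so (free_ride, free_ride) is the
# non-cooperative equilibrium even though (invest, invest) leaves both players better off.
print(best_response("invest", 0), best_response("free_ride", 0))

Repeated interaction, reputation, or collective agreements change these payoffs, which is why cooperation among ISPs can partially escape the dilemma.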

There are two principal vectors for implementing such collective measures. They can target the realm of cybercrime or the information security decisions of stakeholders in the ICT ecosystem. Both areas may be addressed with measures in four categories (or with a mix of such measures): legal and regulatory measures, economic means, technical solutions, and informational/behavioral approaches (see Table 2). Cybercrime can principally be reduced by increasing its costs and by reducing its benefits. Strengthening law enforcement via national legislation and multi-national and international treaties, such as the European Convention on Cybercrime, which has been ratified by 23 countries and signed by 23 others,10 is one important precondition to credible enforcement as it defines a legal basis for intervention.

10 See http://conventions.coe.int/Treaty/Commun/ChercheSig.asp?NT=185&CM=&DF=&CL=ENG (last visited February 28, 2009).



Potentially equally important for the preventative effect of law enforcement are the forms and severity of punishment as well as the effectiveness and expediency of law enforcement. Achieving international collaboration is a particularly difficult obstacle that needs to be overcome in this area. Wang and Kim (2009) found that joining the Convention had a deterrent effect on cyberattacks. On the other hand, Png, Wang, and Wang (2008) found an insignificant effect of domestic enforcement. Even so, the shutting down of McColo in November 2008 caused a significant temporary drop in spam activity.11
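The underlying deterrence logic, in the spirit of the economics-of-crime literature cited in the references (Becker, 1968), can be stated as a simple expected-value comparison. The figures in the sketch below are hypothetical placeholders, not empirical estimates.

# Becker-style deterrence condition: an offender commits cybercrime only if the
# expected gain exceeds the expected punishment plus forgone legitimate income.
# All values are hypothetical.

def crime_is_attractive(gain, detection_prob, sanction, opportunity_cost):
    return gain > detection_prob * sanction + opportunity_cost

# Weak enforcement and poor legitimate job prospects make cybercrime attractive...
print(crime_is_attractive(gain=50_000, detection_prob=0.01, sanction=100_000,
                          opportunity_cost=5_000))   # True
# ...while a higher detection probability or better legitimate earnings tip the balance.
print(crime_is_attractive(gain=50_000, detection_prob=0.30, sanction=100_000,
                          opportunity_cost=30_000))  # False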

Given the fact that much criminal activity is organized (but not necessarily executed) in countries with a relatively weak rule of law, other measures might be needed in addition to legislative and regulatory initiatives. A number of interesting proposals have been made to reduce spam via technical-economic mechanisms, such as the requirement to have a token or make a payment before a mailbox can be accessed (e.g., Loder, Van Alstyne, & Wash, 2004). Although such measures are principally effective and interesting, they will only take care of malware disseminated via email. Moreover, their effectiveness will depend on a critical mass of users adopting the method. General measures to increase the opportunity costs of cybercrime, such as the creation of attractive employment opportunities for skilled programmers, will contribute to a reduction of activity in the long term. Many of the architects of the internet have proclaimed that it was not built for the current onslaught of legal and illegal activity. Several initiatives, such as the DNS Security Extensions (DNSSEC), which are designed to make the domain name system more secure, are underway. Several other protocols are deployed in the web and others may be rolled out as an underlay network. Whereas all of these measures have already triggered responses by attackers, they do make cybercrime more expensive, and one would expect that this has a dampening effect, at least in the short term and other things equal. Furthermore, information-sharing arrangements at the national level in CERTs and CSIRTs, as well as at the international level, such as the more than 200 organizations collaborating in the Forum of Incident Response and Security Teams (FIRST), are important steps in the right direction.
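A hashcash-style proof-of-work stamp is one way such a token requirement could be implemented: the sender performs a small, verifiable computation per message, which is negligible for legitimate correspondents but costly at spam volumes. The sketch below is illustrative only; it is not the payment-based mechanism proposed by Loder, Van Alstyne, and Wash (2004), and the difficulty setting and function names are assumptions.

# Illustrative hashcash-style proof-of-work "stamp" for outgoing email (sketch only).

import hashlib
from itertools import count

DIFFICULTY = 18  # required leading zero bits; higher means more work per message

def mint_stamp(recipient: str) -> str:
    """Find a nonce such that SHA-256(recipient:nonce) has DIFFICULTY leading zero bits."""
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest >> (256 - DIFFICULTY) == 0:
            return stamp

def verify_stamp(stamp: str, recipient: str) -> bool:
    """Verification is a single hash, so the receiving mail server does almost no work."""
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return stamp.startswith(f"{recipient}:") and digest >> (256 - DIFFICULTY) == 0

stamp = mint_stamp("alice@example.org")
print(stamp, verify_stamp(stamp, "alice@example.org"))

As the text notes, such schemes only address email-borne malware and require widespread adoption before the per-message cost actually deters high-volume senders.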

Whereas measures devised to fight cybercrime are relatively uncontested, this is not true for policies directed towards information security markets. Nonetheless, there is an emerging discussion in many countries as to whether such measures are appropriate, whether they should be targeted at key players in the ICT ecosystem, and which organizations should be in charge of such initiatives (Anderson et al., 2008; OECD, 2009). Several countries have adopted laws against spam. Although their effectiveness is sometimes questioned, legislation such as the US CAN-SPAM Act of 2003 and subsequent implementation measures by the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) have provided a legal basis to prosecute several spammers with deterrent effects. The recent discussion goes beyond such measures to explore specific regulatory interventions. A critical weakness of any attempt to legislate or regulate security is that specific measures may quickly be outsmarted by new attack technologies. However, the notion of "best practices" could be developed on an ongoing, adaptive basis. This is, for example, the approach taken by the Australian Communications and Media Authority (ACMA) in the Australian Internet Security Initiative. In this collaborative effort, 59 ISPs (as of February 28, 2009) and the regulatory agency share information to reduce the threat from botnets. The approach is not without critics, many of whom point to the lack of transparency in following up on security threat reports. Nonetheless, the ability of the regulatory authority to threaten the imposition of formal regulation seems to have boosted participation, and the initiative is expanding steadily.

The Australian example raises the question of what role, if any, regulatory agencies should play. Historically, apart from law enforcement, cybersecurity was overwhelmingly organized in voluntary and self-regulatory arrangements, giving regulatory agencies only ancillary jurisdiction (Lewis, 2005). For example, in the US, the Federal Communications Commission was part of implementing the Communications Assistance for Law Enforcement Act (CALEA), 1994 legislation defining the obligations of telecommunications and information service providers in assisting law enforcement. In some ways, however, regulatory agencies are well-positioned to assume a stronger role in contributing to enhanced cybersecurity: they have jurisdiction over much of the key infrastructure used in providing information services, which might be an efficient intervention point; they typically have powers to compel information from providers that would enable finding better policies; they have administrative processes in place that could assist in developing policies; and they have some experience with international collaboration (Lukasik, 2000). Regulatory agencies also have practice in designing financing models for the pursuit of public interest goals rather than imposing an unfunded mandate on stakeholders. On the other hand, they also have disadvantages: the administrative process is often slow and beholden to special interest groups, and regulatory measures might be too rigid to allow adaptive and dynamic responses (Garcia & Horowitz, 2007). However, these challenges might be overcome if adaptive regulatory tools are used. In any case, regulatory agencies are an interesting avenue to pursue cybersecurity, but they will have to work in concert with other agencies.

An even more contested issue is the modification of existing liability rules. In principle, liability rules would facilitate a desirable evolutionary learning process. By creating enforceable rights and obligations, they would allow common and case law institutions to gradually develop a body of best practices and reshape incentives to reduce negative and strengthen positive externalities. Compared to the status quo, such rules will create advantages for some players and disadvantages for others. Most likely, such changes will mobilize the prospective losers, who will attempt to block solutions even if the winners could have compensated them. Changes in economic institutions and incentives could achieve similar outcomes.

11 See B. Krebs, "Major source of online scams and spams knocked offline", Washington Post, November 11, 2008. Retrieved September 3, 2009 from http://voices.washingtonpost.com/securityfix/2008/11/major_source_of_online_scams_a.html.



Among the possible measures discussed in the literature are increased penalties for security flaws, the establishment of markets for vulnerabilities, insurance markets, and the mandatory release of information about security violations (Rescorla, 2004; Choi, Fershtman, & Gandal, 2005; Shostack, 2005). All these proposals have some appealing features, but most also have downsides. For example, insurance markets may not work well because the risks of individual stakeholders, exposed to the same threats, may be highly correlated. Releasing information may assist consumers in avoiding firms with low security standards, but it may also trigger more devious attacks (see the more detailed discussion in Van Eeten & Bauer, 2008). At a technical level, information security standards, mandated security testing, or peer-based information security approaches are promising avenues. Many nations are actively engaged in information sharing programs at the national level. Some have established reporting requirements for security breaches. Early observations suggest that such measures work in favor of enhanced security. At least, they increase market transparency for consumers.
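The correlated-risk objection to cyber-insurance can be illustrated with a small Monte Carlo sketch: when losses across policyholders are driven by a common shock (e.g., a worm outbreak hitting everyone at once), the insurer's aggregate losses become far more volatile than under independent risks, even though the expected loss is similar. The claim probability, claim size, and portfolio size below are hypothetical.

# Why correlated cyber risks undermine insurability: compare the volatility of an
# insurer's aggregate losses under independent vs. perfectly correlated claims.

import random
import statistics

random.seed(1)
POLICYHOLDERS, CLAIM_PROB, CLAIM_SIZE, YEARS = 1000, 0.05, 10_000, 2000

def simulate(correlated: bool):
    totals = []
    for _ in range(YEARS):
        if correlated:
            # A common shock hits all policyholders at once, or none of them.
            claims = POLICYHOLDERS if random.random() < CLAIM_PROB else 0
        else:
            claims = sum(random.random() < CLAIM_PROB for _ in range(POLICYHOLDERS))
        totals.append(claims * CLAIM_SIZE)
    return statistics.mean(totals), statistics.pstdev(totals)

print("independent risks:", simulate(correlated=False))
print("correlated risks: ", simulate(correlated=True))
# Expected losses are comparable, but the standard deviation under correlation is enormous,
# forcing much larger capital reserves or making coverage unattractive to offer.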

The effectiveness of the available broad spectrum of measures is not well established. Very few empirical studies document the possible impact of specific measures. As it is unlikely that a comprehensive information base that would allow such judgments will be available any time soon, measures will have to be adopted on an experimental, trial-and-error basis. Different options may complement each other (such as national and international legislation), whereas others may be partial or full substitutes for each other. More research and practical policy experimentation is needed to move this process along.

6. Conclusions

Information and communication technologies form a vast open ecosystem. This openness supports the innovative dynamics of the internet (Benkler, 2006), but it also renders infrastructure and services vulnerable to attack. During the past decade, information security breaches have become increasingly motivated by fraudulent and criminal motives and are threatening to undermine the open fabric of the internet (Zittrain, 2008). Because it benefits all participants in the ICT ecosystem, information security has strong public good characteristics. However, defenses against this relentless stream of attacks are predominantly the outcome of security decisions by private market and non-market players. As security is costly, accepting some level of insecurity is economically rational. A critical question, which was pursued in this paper, is, therefore, whether these decentralized decisions result in a socially acceptable level of information security.

The paper set out to pursue three interrelated aims. It first analyzed cybercrime and information security as two interdependent markets. Second, in this framework, the incentives of key players in the ICT ecosystem (e.g., ISPs, e-commerce service providers, and different types of users) were examined in detail. The research found that all players in the ICT ecosystem face incentives such as revenue effects, reputation effects, and social relations that increase the level of security. At the same time, due to physical network links, multi-layered business transactions, and social relations, some of the costs and benefits of security decisions are dispersed throughout the entire ecosystem. Individual decision-makers therefore generate externalities, i.e., they do not perceive all relevant costs and benefits of their choices. Three typical scenarios describe the extent and severity of externalities. In rare instances, all social costs and benefits are internalized and no externalities are present. In a second scenario, a stakeholder in the value net, such as a financial institution or an ISP that has a business relationship with the player that is the source of the externality (often end users), is able to fully or partially control it. In this case, the security externality is internalized at the sector level. In the third scenario, no such opportunities exist and costs are imposed on the whole ICT ecosystem and society at large.

Threats to cybersecurity therefore have two sources: a continuous stream of attacks from the underworld of cybercrime and the interconnected and distributed nature of the ICT ecosystem. As decentralized decisions are afflicted with externalities, some form of collective action, ranging from voluntary self-regulation to government-prodded co-regulation and forms of legislative and regulatory intervention, may be the only way to overcome the problem. Third, therefore, the paper explored alternative policy approaches. Such actions can use two complementary levers: measures to quench cybercrime and measures to enhance information security. Both strategies will result in a lower level of attacks. Measures to reduce cybercrime may or may not increase the level of security, but they will unequivocally reduce the direct and indirect costs of maintaining a given level of security. Measures to enhance information security range from market-based interventions, such as the establishment of markets for vulnerabilities, to regulatory measures. Other things equal, they will increase the cost of cybercrime and therefore reduce the level of attacks. However, the relation between attackers and defenders resembles an arms race, so the effects of security enhancements may only be temporary.

The complex nature of the problem suggests that a mix of policy instruments may currently be the best approach. Regulatory agencies are in some ways well-positioned to assume responsibility in the area of information security. They have jurisdiction over large parts of the ICT infrastructure, are experienced in developing policies and funding models for measures that might impose costs on certain players, and are increasingly collaborating at an international level. However, regulatory agencies will have to develop innovative, adaptive approaches, as existing administrative inertia and the potential rigidity of ex ante regulation may render them ineffective in addressing issues of cybercrime. Moreover, regulation alone will be insufficient, as it cannot effectively target the world of cybercrime. Concerted efforts across public and private stakeholders, including legal measures, national and international law enforcement, cooperation in CERTs and CSIRTs, as well as technical measures at the level of the infrastructure, hardware, and software, will therefore be necessary if the problem of cybersecurity is to be tackled.



Acknowledgements

Background research for this article was supported by funding from the OECD (Paris) and the Dutch Ministry of Economic Affairs (The Hague). Yuehua Wu and Tithi Chattopadhyay provided able research assistance during early stages of the project. We would also like to thank the more than 50 experts from industry and government who generously contributed time and knowledge during and after the field interviews. An earlier version of this paper was presented at WebSci09, Athens, Greece, March 18–20, 2009.

References

Anderson, R. (2001). Why information security is hard: An economic perspective. Paper presented at the Annual Computer Security Applications Conference, New Orleans, LA, December 13. Retrieved September 3, 2009 from http://www.acsac.org/2001/papers/110.pdf.
Anderson, R., Böhme, R., Clayton, R., & Moore, T. (2008). Security economics and European policy. Cambridge: University of Cambridge Computer Laboratory.
Anderson, R., & Moore, T. (2006). The economics of information security. Science, 314(5799), 610–613.
Bauer, J. M., Van Eeten, M., Chattopadhyay, T., & Wu, Y. (2008). Financial implications of network security: Malware and spam. Report for the International Telecommunication Union (ITU), Geneva, Switzerland.
Becker, G. S. (1968). Crime and punishment: An economic approach. Journal of Political Economy, 76(2), 169–217.
Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. New Haven, CT: Yale University Press.
Bovel, D., & Martha, J. (2000). Value nets: Breaking the supply chain to unlock hidden profits. New York, NY: John Wiley.
Brandenburger, A. M., & Nalebuff, B. J. (1996). Co-opetition. New York, NY: Doubleday.
Camp, L. J. (2006). The state of economics of information security. I/S: A Journal of Law and Policy for the Information Society, 2. Retrieved September 3, 2009 from http://www.is-journal.org/V02I02/2ISJLP189-Camp.pdf.
Camp, L. J. (2007). Economics of identity theft: Avoidance, causes and possible cures. New York, Berlin: Springer.
Choi, J. P., Fershtman, C., & Gandal, N. (2005). Internet security, vulnerability disclosure, and software provision. Paper presented at the Fourth Workshop on the Economics of Information Security, Cambridge, MA, June 2. Retrieved September 3, 2009 from http://infosecon.net/workshop/pdf/9.pdf.
CSI (2008). 2008 CSI computer crime and security survey. San Francisco, CA: Computer Security Institute.
Ehrlich, I. (1996). Crime, punishment, and the market for offenses. Journal of Economic Perspectives, 10(1), 43–67.
Franklin, J., Paxson, V., Perrig, A., & Savage, S. (2007). An inquiry into the nature and causes of the wealth of internet miscreants. Paper presented at the 14th ACM Conference on Computer and Communications Security, Alexandria, VA, October 31. Retrieved September 3, 2009 from http://www.icir.org/vern/papers/miscreant-wealth.ccs07.pdf.
Fransman, M. J. (2007). The new ICT ecosystem: Implications for Europe. Edinburgh: Kokoro.
Garcia, A., & Horowitz, B. (2007). The potential for underinvestment in internet security: Implications for regulatory policy. Journal of Regulatory Economics, 31(1), 37–55.
Gilsing, V. (2005). The dynamics of innovation and interfirm networks: Exploration, exploitation and co-evolution. Cheltenham, UK: Edward Elgar Publishing.
Gordon, L. A., & Loeb, M. P. (2004). The economics of information security investment. In L. J. Camp, & S. Lewis (Eds.), Economics of information security (pp. 105–128). Dordrecht: Kluwer Academic Publishers.
House of Lords (2007). Personal internet security. Science and Technology Committee, 5th Report of Session 2006–07, London. Retrieved September 3, 2009 from http://www.publications.parliament.uk/pa/ld/ldsctech.htm.
Illing, G., & Peitz, M. (Eds.). (2006). Industrial organization and the digital economy. Cambridge, MA: MIT Press.
Jakobsson, M., & Ramzan, Z. (Eds.). (2008). Crimeware: Understanding new attacks and defenses. Upper Saddle River, NJ: Addison-Wesley Professional.
Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G. M., Paxson, V., et al. (2008). Spamalytics: An empirical analysis of spam marketing conversion. Paper presented at the 15th ACM Conference on Computer and Communications Security, Alexandria, VA, October 28. Retrieved September 3, 2009 from http://www.icsi.berkeley.edu/pubs/networking/2008-ccs-spamalytics.pdf.
Kshetri, N. (2006). The simple economics of cybercrimes. IEEE Security and Privacy, 4(1), 33–39.
Kunreuther, H., & Heal, G. (2003). Interdependent security. Journal of Risk and Uncertainty, 26(2), 231–249.
LaRose, R., Rifon, N., & Enbody, R. (2008). Promoting personal responsibility for internet safety. Communications of the ACM, 51(3), 71–76.
Lewis, J. A. (2005). Aux armes, citoyens: Cyber security and regulation in the United States. Telecommunications Policy, 29(11), 821–830.
Li, F., & Whalley, J. (2002). Deconstruction of the telecommunications industry: From value chains to value networks. Telecommunications Policy, 26(9–10), 451–472.
Loder, T., Van Alstyne, M., & Wash, R. (2004). An economic answer to unsolicited communication. Paper presented at the 5th ACM Conference on Electronic Commerce, New York, May 17. Retrieved September 3, 2009 from http://portal.acm.org/citation.cfm?id=988780.
Lukasik, S. J. (2000). Protecting the global information commons. Telecommunications Policy, 24(6–7), 519–531.
Murmann, J. P. (2003). Knowledge and competitive advantage: The co-evolution of firms, technology, and national institutions. Cambridge, UK: Cambridge University Press.
Nelson, R. R. (1994). The co-evolution of technology, industrial structure, and supporting institutions. Industrial and Corporate Change, 3(1), 47–63.
OECD (2009). Computer viruses and other malicious software. Paris: Organisation for Economic Co-operation and Development.
Peppard, J., & Rylander, A. (2006). From value chain to value network: Insights for mobile operators. European Management Journal, 24(2–3), 128–141.
Perrow, C. (2007). The next catastrophe: Reducing our vulnerabilities to natural, industrial, and terrorist disasters. Princeton, NJ: Princeton University Press.
Png, I. P. L., Wang, C.-H., & Wang, Q.-H. (2008). The deterrent and displacement effects of information security enforcement: International evidence. Journal of Management Information Systems, 25(2), 125–144.
Posner, R. A. (2004). Catastrophe: Risk and response. New York: Oxford University Press.
Rescorla, E. (2004). Is finding security holes a good idea? Paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, MN, May 13. Retrieved September 3, 2009 from http://www.rtfm.com/bugrate.pdf.
Schipka, M. (2007). The online shadow economy: A billion dollar market for malware authors. White Paper. New York, NY: MessageLabs Ltd. Retrieved September 3, 2009 from http://www.fstc.org/docs/articles/messaglabs_online_shadow_economy.pdf.
Schneier, B. (2004). Secrets and lies: Digital security in a networked society. New York, NY: Wiley.
Shapiro, C., & Varian, H. R. (1999). Information rules: A strategic guide to the network economy. Boston, MA: Harvard Business School Press.
Shostack, A. (2005). Avoiding liability: An alternative route to more secure products. Paper presented at the Fourth Workshop on the Economics of Information Security, Cambridge, MA, June 3. Retrieved September 3, 2009 from http://infosecon.net/workshop/pdf/44.pdf.
Symantec (2009). The state of spam. Report #32, August 2009. Retrieved September 3, 2009 from http://eval.symantec.com/mktginfo/enterprise/other_resources/b-state_of_spam_report_08-2009.en-us.pdf.
Taleb, N. N. (2007). The black swan: The impact of the highly improbable. New York: Random House.
Van Eeten, M., & Bauer, J. M. (2008). The economics of malware: Security decisions, incentives and externalities. Directorate for Science, Technology and Industry, Committee for Information, Computer and Communications Policy, DSTI/ICCP/REG(2007)27. Paris: OECD.
Van Eeten, M., Bauer, J. M., & Tabatabaie, S. (2009). Damages from internet security incidents: A framework and toolkit for assessing the economic costs of security breaches. Report for OPTA. Delft: Delft University of Technology.



Varian, H., Farrell, J., & Shapiro, C. (2005). The economics of information technology: An introduction. Cambridge, UK: Cambridge University Press.
Wang, Q.-H., & Kim, S.-H. (2009). Cyber-attacks: Cross-country interdependencies and enforcement. Paper presented at the Eighth Workshop on the Economics of Information Security, London, June 24. Retrieved September 3, 2009 from http://weis09.infosecon.net/files/153/paper153.pdf.
White House (2009). Cyberspace policy review: Assuring a trusted and resilient information and communications infrastructure. Washington, DC: The White House. Retrieved September 3, 2009 from http://www.whitehouse.gov/assets/documents/Cyberspace_Policy_Review_final.pdf.
Zittrain, J. (2008). The future of the internet and how to stop it. New Haven, CT: Yale University Press.