
Analyzing the Incentives in Community-based Security Systems

Pern Hui Chia
Centre for Quantifiable Quality of Service in Communication Systems (Q2S)

Norwegian University of Science and Technology (NTNU)
[email protected]

Abstract—Apart from mechanisms to make crowd-sourcing secure, the reliability of a collaborative system is dependent on the economic incentives of its potential contributors. We study several factors related to the incentives in a community-based security system, including the expectation of social influence and the contagion effect of generosity. We also investigate the effects of organizing community members differently in a complete, random and scale-free structure. Our simulation results show that, without considering any specific incentive schemes, it is not easy to encourage user contribution in a complete-graph community structure (global systems). On the other hand, a moderate level of cooperative behavior can be cultivated when the community members are organized in a random or scale-free structure (social networks).

Keywords-Incentives, Collaborative Security, Game Theory

I. INTRODUCTION

Despite the popularity of reputation and recommender systems, relying on community effort for security purposes is still a new concept to many. PhishTank [1] and Web Of Trust (WOT) [2] are two of the few systems that employ user inputs to improve web security. PhishTank relies on user reporting and voting against suspected phishes, while WOT collates user ratings on several trust and security aspects of websites. Researchers have looked at the reliability of Community-based Security Systems (CSS). Moore and Clayton [3] argued that as participation in PhishTank follows a power-law distribution, its outputs are particularly susceptible to manipulation by the few highly active contributors.

Indeed, besides system design to prevent manipulation by dishonest users or attackers, the reliability of a CSS is highly dependent on the incentives of its potential contributors. When many users contribute actively, diverse user inputs serve to cancel out the errors made by individuals and to scrutinize manipulative attempts. On the other hand, when many users do not contribute, the outcomes can be biased towards the judgment of a few, or be manipulated easily. In the reverse manner of the tragedy of the commons [4], which depicts individuals consuming a common resource irresponsibly, motivating active participation in a CSS is a problem of public goods provisioning: individuals face a cost that discourages them from contributing to the common protection.

In this work, we look at several factors related to the incentives in a CSS. We adapt the normalized total-effort security game in [5][6] to depict the scenario of collaborative security protection, but our model takes into account the long-term considerations of users through the framework of infinitely repeated games. We first describe the basic model in Section II and extend it with the expectation of social influence in Section III. We study the effects of user dynamics and a possible contagion effect of generosity in Section IV. In Section V, we find that it is easier to encourage a moderate level of user contribution in a random or scale-free community structure than in the complete-graph structure of global systems.

II. BASIC MODEL & ANALYSIS

Imagine a community-based system that collates evaluation reports from its members on some trust or security aspects of websites. Inputs from all members are important as they serve to diversify the inputs and cancel out errors made by individuals, in addition to enabling a level of scrutiny on each other's inputs. Without a sufficient level of contribution, a CSS will be deemed unreliable and abandoned by its members. An equally undesired scenario arises when there is a highly skewed participation ratio such that the system can be completely undermined when the few highly active users become corrupted or stop participating [3].

A. An Infinitely Repeated Total-effort Security Game

We use an n-player repeated game to model a CSS consisting of N rational members. We first consider a complete-graph community structure, as in global systems like PhishTank and WOT, where the N = n members collaborate at every time (round) t for common protection. Each game round evaluates a different website (target).

We assume that all members value the benefit of protection b equally and have the same cost of contribution c. Assuming also that the inputs from all members are equally important, we formulate the homogeneous utility U_{i,t} received by member i at game round t to be linearly dependent on the ratio of contribution by the n collaborating members, following the notion of normalized total-effort security in [5][6], as follows:

$$U_{i,t} = \frac{\sum_j a_{j,t}}{n}\, b - c\, a_{i,t} \qquad (1)$$

with $a_{i,t}$ denoting the binary action of either {1: contribute, 0: do not contribute} by member i at game round t.
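To make the payoff structure concrete, here is a minimal Python sketch of the per-round utility in Eq. (1); the function name is ours, and the defaults are taken from the simulation values later listed in Table I.

```python
def round_utility(actions, i, b=2.0, c=1.0):
    """Per-round utility of member i, following Eq. (1).

    actions -- binary actions (1: contribute, 0: do not contribute)
               of all n collaborating members for this round
    b, c    -- benefit of full protection and contribution cost
    """
    n = len(actions)
    return (sum(actions) / n) * b - c * actions[i]
```

For instance, with n = 4 a lone contributor receives (1/4)·2 − 1 = −0.5, while universal contribution yields b − c = 1 for everyone.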


When all members contribute, each of them receives a utility of b − c. If member i is the only contributor, his utility is b/n − c. We assume b > c > b/n, so that contributing to a CSS is a case of the n-person prisoner's dilemma. We further assume an infinitely repeated game to capture the expectation of individual members that the system will evaluate infinitely many websites and exist into the unforeseeable future. If the system is known to last only for a finite amount of time, not contributing at all game rounds is the (subgame-perfect) equilibrium strategy.

We consider that individual members rank their infinite payoff streams using the δ-discounted average criterion. The discount factor δ_i characterizes how a player weighs his payoff in the current round against future payoffs. In the context of this paper, it can be interpreted as how a member perceives the long-term importance of common protection and his relationship with the n − 1 interacting peers. We assume that δ_i is heterogeneous. A short-sighted member has δ_i ≈ 0, while a player who values the long-term benefit of the CSS, or who cares about the long-term relationship with other members, has δ_i ≈ 1. With 0 ≤ δ_i < 1, the δ-discounted average payoff of member i is given by:

$$(1 - \delta_i) \sum_{t=1}^{\infty} (\delta_i)^{t-1}\, U_{i,t} \qquad (2)$$
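In a simulation only a finite prefix of the payoff stream is realized, so a sketch of Eq. (2) necessarily truncates the infinite sum; this truncation and the function name are ours, not the paper's.

```python
def discounted_average(payoffs, delta):
    """Delta-discounted average of a payoff stream, Eq. (2),
    truncated to the finite prefix realized in a simulation."""
    total = sum(delta ** (t - 1) * u
                for t, u in enumerate(payoffs, start=1))
    return (1 - delta) * total
```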

Analyzing the equilibrium behaviors of the n > 2 community members is more complicated than in a 2-player repeated game. A trivial setting is when all members are assumed to employ an n-player 'grim trigger' strategy, under which each member contributes initially but threatens to stop contributing forever once he realizes that any of his peers has not contributed in the previous round. Given this largest threat of punishment, an equilibrium in which all members always contribute can be achieved if, ∀i:

$$\delta_i > \frac{cn - b}{bn - b} \qquad (3)$$

A simple relationship between the cost c and benefit b can now be established. If c is close to b, the required δ_i approaches 1 as n increases. This reflects the real-life observation that user inputs are hard to obtain if the contribution cost is large relative to the benefit of protection.
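As a concrete check, substituting the values later used in the simulations (Table I: c = 1, b = 2, n = N = 100) into Eq. (3) gives

$$\delta_i > \frac{cn - b}{bn - b} = \frac{1 \cdot 100 - 2}{2 \cdot 100 - 2} = \frac{98}{198} \approx 0.495,$$

which is consistent with the moderate threshold of roughly 0.5 observed for the grim-trigger case (γ = −1.0) in Figure 1.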

A challenge in a repeated n-player game is that one cannot identify and punish those who have not contributed without a centralized monitoring mechanism, which can be expensive to build or may threaten the anonymity of contributors. A player can only work out the contribution ratio of the others in the previous round, r_{t−1}, based on the payoff he receives. The n-player 'grim trigger' strategy inefficiently punishes everyone, making it an unrealistic strategy.

III. THE EXPECTATION ON SOCIAL INFLUENCE

Rather than using the 'grim trigger' strategy, we adapt an idea from [7] such that community members reason about their choice of action based not only on the past, but also on their expectation of how their actions will influence the choices of others in subsequent rounds. We construct two simple influence rules, as follows:

Linear influence. A player believes that his contribution will have a positive influence on his peers and increase their contribution ratio by γ in the subsequent game rounds. Similarly, he expects that their contribution ratio will drop by γ if he does not contribute. The rule can be written as:

$$\hat{r}_\gamma = \min(\max(r_{t-1} + \gamma,\ 0),\ 1) \qquad (4)$$

Sigmoid influence. As in the linear rule, a player expects his action to shift his peers' behavior; however, the contribution ratio in the subsequent rounds is updated following a sigmoid curve. Specifically, a player reasons that his action will have a reduced influence on others when the current contribution level is close to the extremes, 0 or 1:

$$\hat{r}_\gamma = \frac{1}{1 + e^{-w\,(r_{t-1} + \gamma - \frac{1}{2})}} \qquad (5)$$
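Both rules are straightforward to express in code. Here is a minimal sketch in which the function names are ours and the exponent of Eq. (5) is read as an increasing logistic, matching the behavior described in the text.

```python
import math

def linear_influence(r_prev, gamma):
    """Expected peer contribution ratio under the linear rule, Eq. (4)."""
    return min(max(r_prev + gamma, 0.0), 1.0)

def sigmoid_influence(r_prev, gamma, w=10.0):
    """Expected peer contribution ratio under the sigmoid rule, Eq. (5).

    The steepness w defaults to the Table I value; influence is damped
    when the current contribution level is near the extremes 0 and 1.
    """
    return 1.0 / (1.0 + math.exp(-w * (r_prev + gamma - 0.5)))
```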

With the above expectation, member i will contribute at time t if:

$$V_c(r_{t-1}) - V_n(r_{t-1}) + \frac{\delta_i}{1-\delta_i}\left[V_c(\hat{r}_\gamma) - V_n(\hat{r}_{-\gamma})\right] > 0 \qquad (6)$$

with V_c and V_n denoting the utilities of contributing and not contributing respectively, based on the last observed contribution level r_{t−1} and the expected future contribution ratio r̂. This assumes that a member who contributes now believes that, since his action will positively influence the others, he will also be better off contributing in the future; the same reasoning applies to not contributing. These simple rules (4), (5) and (6) model the bounded rationality of users. Indeed, it is non-trivial to reason about best responses in an interconnected structure. Each member is not aware of the discount factors of others. One also may not know about his peers' peers and how they perform in their respective games.
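A sketch of the resulting decision rule follows. The text does not fully pin down how V_c and V_n account for the member's own action, so the reading below (r denotes the ratio among the n − 1 peers, and the member's own action shifts the total by 1/n) is an assumption, as are the helper names; the influence argument can be either rule sketched above.

```python
def will_contribute(r_prev, delta, gamma, n, influence, b=2.0, c=1.0):
    """Decision rule of Eq. (6) for one member.

    r_prev    -- contribution ratio observed in the previous round
    influence -- a rule mapping (r_prev, gamma) to the expected
                 future contribution ratio of peers
    Assumes r counts the n - 1 peers (one plausible reading).
    """
    def v(r, contribute):
        # Utility from Eq. (1): peers at ratio r, plus own action.
        own = 1 if contribute else 0
        return ((r * (n - 1) + own) / n) * b - c * own

    gain_now = v(r_prev, True) - v(r_prev, False)
    gain_future = (v(influence(r_prev, gamma), True)
                   - v(influence(r_prev, -gamma), False))
    return gain_now + delta / (1.0 - delta) * gain_future > 0
```

Under this reading, with the Table I values (n = 100) and γ = ±0.25, the rule returns False even for δ close to 1, which is consistent with the flat zero-payoff curves reported for the complete graph in Figure 1.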

A. Simulation Results

We consider several levels of expectation of how much an action can influence the actions of others, as shown in Table I. Note that ±γ depicts the expectation that contributing (or not) has a positive (or negative) influence on the peers' contribution ratio. The suffix s denotes the use of the sigmoid influence rule. Each expectation level is simulated 50 times for computing the mean payoff. In each simulation run, every member is assigned a new discount factor drawn uniformly between a minimum value δmin and 1.

Figure 1 shows the simulation results. With γ = −1.0, which is equivalent to the 'grim trigger' strategy, full contribution giving the maximum payoff of b − c = 1 is a stable outcome when all members have a discount factor higher than a moderate threshold ≈ 0.5. This is shown by the dot-dashed line in Figure 1. However, as noted above, the 'grim trigger' strategy is not realistic in practice.


Table I. Simulating users' expectation on social influence

Var.  Description                                        Value(s)
N     Total community members                            100
c     Contribution cost                                  1
b     Benefit of full protection                         2
w     Steepness of the sigmoid function                  10
γ     Expected social influence on the contribution      ±0.25, ±0.33s,
      ratio of peers                                     −0.50, −1.0

[Figure 1. Mean payoff U versus δmin in a Complete-graph (C) community structure, for γ = ±0.25, ±0.33s, −0.5 and −1.0.]

With γ = −0.5, that is, when members expect that non-contribution will influence half of their peers to also stop contributing, a fully cooperative equilibrium can only be achieved if all members place a large weight on the benefit of long-term protection, as shown by the dotted line in Figure 1. When the community members expect that their respective actions will have a reduced influence on others' motivation (e.g., γ = ±0.33s, ±0.25), we find that the average payoff U remains at zero even as δmin approaches 1. In other words, no community member contributes in equilibrium.

The above has several implications. First, the results show that, without the help of any incentive schemes, the level of user contribution to a CSS can be expected to be very low. Centralized mechanisms such as monitoring, reputation and micro-payment may help to encourage contribution, but there can be challenges (including cost and anonymity concerns) in implementing them in practice. The results also highlight the role of education in cultivating a sense of social responsibility (i.e., increasing the users' perception of γ) and in informing about the long-term importance and benefit of collaborative security (i.e., increasing the δ_i of users). This is also challenging, as it is well known that ordinary users do not regard security as their primary concern [8].

IV. THE EFFECTS OF USER DYNAMICS & GENEROSITY

We investigate whether a cooperative spirit can be cultivated given the presence of a small fraction θ ≥ 0 of members who contribute to the common protection unconditionally. These 'nice' users can be thought of as those who are extremely generous in real life, or those who have been employed to ensure a minimal level of contribution in the system.

We also factor in simple user dynamics in the CSS. In each game round, a fraction m ≥ 0 of under-performing users (i.e., those with average payoffs ≤ 0 who have been through a minimum number of transient rounds τ) is programmed to leave. When a user leaves, we assume that another user joins and connects with the remaining members. This models the real-life scenario where frustrated users leave, while new users join the community continuously.
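A sketch of one churn step under these dynamics; the member representation and the helper name are assumptions for illustration, with defaults taken from Table II.

```python
import random

def churn_step(members, t, m=0.01, tau=5, delta_min=0.5):
    """One round of user dynamics (Table II values as defaults).

    members -- list of dicts with keys 'avg_payoff', 'joined_at', 'delta'
    Under-performing members (average payoff <= 0, past the transient
    period tau) leave, worst performers first; each is replaced by a
    newcomer whose discount factor is drawn uniformly from [delta_min, 1).
    """
    eligible = [mb for mb in members
                if mb['avg_payoff'] <= 0 and t - mb['joined_at'] >= tau]
    eligible.sort(key=lambda mb: mb['avg_payoff'])  # worst performers first
    for leaver in eligible[:int(m * len(members))]:
        members.remove(leaver)
        members.append({'avg_payoff': 0.0, 'joined_at': t,
                        'delta': random.uniform(delta_min, 1.0)})
```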

Table II. Simulating community dynamics and nice users

Var.  Description                                        Value(s)
θ     Fraction of nice users per game round              0.04, 0.08
m     Fraction of users leaving and joining per round    0.01
τ     Transient rounds before user dynamics              5

[Figure 2. Mean payoff U versus δmin in a Complete-graph (C) community structure, with a fraction of 'nice' users θ = 0.04 and dynamics of m = 0.01; curves for γ = ±0.25, ±0.33s and −0.5.]


A. Simulation Results

Table II summarizes the variables and values used in our simulation to study the effect of community dynamics and 'nice' users. As before, each scenario is repeated over 50 simulation runs for computing the mean value. Without user dynamics, as in Section II, we observe that the contribution ratio and average payoff settle quickly after several game rounds. With user dynamics, these values fluctuate, but only slightly, as one member may leave (while another joins) per game round. Considering this, in each simulation run, we measure the average payoff of all community members only after t = 250 game rounds.

Figure 2 plots the average payoff of the community members when a small fraction of 'nice' users (θ = 0.04) and dynamics (m = 0.01) are considered. In every game round, the worst-performing member is programmed to leave, while 4 members are randomly selected to contribute unconditionally. As shown in the figure, the presence of 4 'nice' members does help to increase the contribution level (average payoff) slightly, from zero in Figure 1 to about 0.15 in Figure 2. This demonstrates the role that generous or employed members can play in encouraging others to contribute.

However, notice that, other than the case with γ = −0.5, the average payoff stays flat even as the minimum discount factor δmin approaches 1. This hints at the limited impact of the 'nice' users in a global system (complete-graph community structure) after all. It also highlights the risk


of over-reliance on a small group of 'nice' users, which causes a highly disproportionate contribution ratio in the system. A highly skewed contribution pattern can harm the Byzantine fault tolerance that community-based systems seek, making the system susceptible to manipulation by the few highly active users [3].

V. THE EFFECTS OF COMMUNITY STRUCTURE

Thus far, we have studied only the complete-graph community structure, used in global systems such as PhishTank and WOT. In the following, we consider two other structures, mimicking the topology of social networks.

Random (R). First, we consider a random network which connects any two members with a probability p. To ensure that all members will engage in a multi-player game, we require each user to have a degree k ≥ ψ initially. To build a random network, we first instantiate N members in the system. Then, for each member, we randomly assign ψ peers selected from the N − 1 candidates with uniform probability. In every round of the repeated game, a member plays an n-person game with his k peers, where n = k + 1. Note that a peer of this member will, on the other hand, play an n′-person game, where n′ = k′ + 1, with her own peers, including this member.
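A minimal sketch of this construction; note that it is a 'ψ-out' random graph (each member picks ψ uniform random peers) rather than a classical G(N, p) graph, and the representation as a dict of peer sets is ours.

```python
import random

def build_random_network(n_members, psi=4):
    """Random (R) structure: each member links to psi peers chosen
    uniformly at random, so every member starts with degree k >= psi."""
    peers = {i: set() for i in range(n_members)}
    for i in range(n_members):
        candidates = [j for j in range(n_members) if j != i]
        for j in random.sample(candidates, psi):
            peers[i].add(j)   # sets absorb picks that already exist
            peers[j].add(i)   # links are undirected
    return peers
```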

Scale-free (S). We also study a scale-free community structure. Many real-life networks, such as protein interactions, interlinked web pages and citation patterns, have been shown to exhibit the scale-free property [9][10], where the probability of a vertex having degree k follows a power-law distribution: P(k) ∼ k^−λ. Two important processes for a scale-free network are growth and preferential selection. As with the random network, we require each member to have at least degree k ≥ ψ initially. We build up a scale-free network by first initializing ψ + 1 users that are fully connected. Then, for every member that newly arrives in the community, we connect him to ψ peers, with each candidate j selected with a probability equal to the candidate's degree over the sum of the degrees of all existing members, P(select_j) = k_j / Σk. As in the random network, in every game round, each member i plays an n_i-person game with his respective peers, where n_i = k_i + 1.
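A matching sketch of the growth-and-preferential-selection process; drawing the ψ peers sequentially, with degree weights updating between draws, is a common approximation and an assumption on our part.

```python
import random

def build_scale_free_network(n_members, psi=4):
    """Scale-free (S) structure via growth and preferential selection:
    start from a fully connected core of psi + 1 members, then attach
    each newcomer to psi distinct peers, choosing candidate j with
    probability proportional to its degree, P(select j) = k_j / sum(k)."""
    peers = {i: set(range(psi + 1)) - {i} for i in range(psi + 1)}
    for new in range(psi + 1, n_members):
        peers[new] = set()
        while len(peers[new]) < psi:
            nodes = [j for j in peers if j != new and j not in peers[new]]
            weights = [len(peers[j]) for j in nodes]
            j = random.choices(nodes, weights=weights)[0]
            peers[new].add(j)
            peers[j].add(new)
    return peers
```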

Note that all members remain interconnected in the scale-free and random networks. Both structures can be used to model a 'personalized' system whereby individual members interact only with peers in their personalized community. An example of a 'personalized' system for security is NetTrust [11], which advocates the use of ratings from personalized sources to provide reliable risk signals against suspicious websites to individual users.

A. Simulation Results

As before, each simulation scenario is repeated over 50 runs to compute the mean value. In each simulation run, the community structures are reconstructed, and we measure the average payoff of all members only after t = 250 game rounds. The simulation values in Tables I and II are used. In every game round, the 4 most connected members (i.e., those with the highest degree k) are selected to be 'nice'. User dynamics is also considered as before: the worst-performing user is programmed to leave, while a new user joins the community and establishes links with other members, following the characteristics of the random or scale-free network.

The upper left and right panels of Figure 3 plot the average payoff of players in the Random (R) and Scale-free (S) networks respectively. We see that the average payoff in the Random (R) and Scale-free (S) networks increases to about 0.6 and 0.4 when players believe that their actions have a positive/negative influence of ±0.25 and ±0.33 respectively. This is a significant improvement over the Complete-graph (C) network (Figure 2). Being 'nice' in the scale-free and random networks seems to have a contagious effect as δmin increases. A moderate level of cooperative behavior can thus emerge in a random or scale-free network, but not in the conventional complete-graph structure. The results for the random and scale-free networks appear similar, most likely due to the limited structural variation given a relatively small N = 100.

When δmin is low or moderate, the presence of 'nice' users helps to ensure a minimum level of contribution in the system. If users adjust their perception of the importance and benefit of collaborative security (i.e., their δ_i) based on their initial payoffs, the presence of some 'nice' users in the early phase of the system becomes crucial.

The bottom left and right panels of Figure 3 show the average payoff when each player is connected to at least 8 peers (k ≥ ψ = 8) at the start of the simulation, in the random (R8) and scale-free (S8) structures respectively. We observe a maximum of 30 peers in R8, and 64 in S8. Compared to the performance when ψ = 4, the curves are steeper but still converge to a moderate level of average payoff when using the linear ±0.25 and sigmoid ±0.33 influence rules. The condition for a moderate level of contribution becomes harder to satisfy, but remains achievable in the scale-free and random networks, as the minimum number of peers increases.

VI. RELATED WORK & DISCUSSION

Based on the initial work by Hirshleifer [12], Varian studied the problem of free-riding and system reliability in three different security games: total-effort, best-shot and weakest-link [13]. Grossklags et al. extended the models to include the possibility to self-insure, and studied several interesting aspects such as the effects of information asymmetry between naïve users and security experts [5][6]. Among other findings, the problem of under-provisioning was shown to worsen as the number of players increases. This is an unfavorable finding for collaborative security systems in general; however, the aforementioned works have not factored in the players' long-term considerations.


[Figure 3. Mean payoff versus δmin when considering the Random (R) and Scale-free (S) community structures, in four panels: Random network (ψ = 4), Scale-free network (ψ = 4), Random network (ψ = 8) and Scale-free network (ψ = 8), each with curves for γ = ±0.25, ±0.33s and −0.5. The level of user dynamics is m = 0.01, while the fraction of 'nice' users is θ = 0.04. ψ denotes the minimum number of peers of each community member at the start of each simulation run.]

We reason that a CSS can be modeled as an infinitely repeated game in which the value of common protection is determined by the normalized sum of the efforts of individual members and their peers.

Kandori [14] showed that cooperative behavior can be sustained when players place sufficient weight on future payoffs, in a setting where players are repeatedly matched into pairs to play the classical 2-player prisoner's dilemma. In the P2P domain, Feldman et al. proposed a reciprocative decision function along with a set of mechanisms (e.g., discriminatory server selection, maxflow-based subjective reputation and an adaptive stranger policy) to mitigate the problems of free-riding and white-washing [15]. An insightful analytical model was also devised [16]. Yet, these findings cannot be directly generalized to a community-based system where n > 2 players interact simultaneously. One cannot pinpoint the non-contributors in a multi-player game so as to play a reciprocal strategy, such as the well-known Tit-For-Tat [17]. The threat of punishment on free-riders is diffused due to the implicit anonymity in a multi-player game [18].

There is a wealth of literature studying cooperative behaviors in biological systems using evolutionary games. Pioneered by [19], researchers started to look at the effects of spatial structures on the evolution of cooperation. Cooperative behavior was found to be the dominant trait in a scale-free network [20]. Ohtsuki et al. [21] later showed that a cooperative action is preferred in various structures (circle, lattice, random, regular and scale-free graphs) whenever the benefit divided by the cost of contribution is greater than the average degree of individual members, i.e., b/c > k. However, these studies were also conducted using the classical 2-player prisoner's dilemma. Specifically, we note that the dilemma of contributing to the common protection in an n-player setting diminishes if b/c > n, since every member then strictly prefers to contribute.

A few studies have looked at iterated multi-player games. Seo et al. [22] analyzed the impact of a local opponent pool from which a fixed number of players (n = 4, 8, 16) was selected to play the n-player prisoner's dilemma in every iteration of the population evolution. They found that the smaller the local opponent pool, the easier it was for cooperation to emerge. Rezaei et al. [23] studied the co-evolution of cooperation and network structure using the iterated n-player prisoner's dilemma with 2 ≤ n ≤ 10. Our work differs from theirs in two ways. First, we use the framework of a repeated game with discounted future payoffs (similar to [24], which models the dishonest behavior of multicast agents in network overlays) instead of an evolutionary game. Second, we do not fix the number of players; each player engages in each game round with all his peers, and the number of peers per individual member varies depending on the underlying community structure.

VII. CONCLUDING REMARKS

Starting with a complete graph, which depicts the community structure of global systems such as PhishTank and WOT, our analysis shows that, without any incentive schemes, the level of user contribution can be expected to be very low. Education may help to inform about the importance of common protection and to cultivate a sense of social responsibility; however, this is challenging as users do not perceive security as their primary concern. The presence of generous users in a complete-graph community only helps


in a very limited way to encourage contribution. Meanwhile, over-reliance on a small group of generous users may result in a highly skewed contribution pattern, increasing the risk of manipulation.

Yet, this should not immediately rule out the idea of a community-based security system. Our analysis has not factored in potential incentive schemes, such as reputation, micro-payment and punishment mechanisms, that have been proposed in the field of P2P networks. In addition, our simulation results show that it is possible to encourage a moderate level of cooperative behavior in the random and scale-free community structures. Designing a robust incentive scheme and a reliable aggregation strategy to collate user inputs from social networks for security purposes remains an interesting research area.

Future work. Our current analysis can be extended in several directions. First, it may be interesting to model a community-based security system as a 'best-k-effort' game (adapting the best-shot game in [5][6][12][13]), if k out of n inputs from the community members would suffice for full protection. It would also be interesting to consider an endogenous discount factor, such that individuals update their perception of the long-term importance of the common protection based on the payoff stream they receive. Extending the model to heterogeneous contribution costs and protection benefits may also yield interesting insights.

REFERENCES

[1] PhishTank, http://www.phishtank.com.

[2] Web Of Trust (WOT), http://www.mywot.com.

[3] T. Moore and R. Clayton, "Evaluating the wisdom of crowds in assessing phishing websites," in FC '08: Proceedings of the 12th International Conference on Financial Cryptography and Data Security. Springer, 2008.

[4] G. Hardin, "The tragedy of the commons," Science, vol. 162, pp. 1243–47, 1968.

[5] J. Grossklags, N. Christin, and J. Chuang, "Secure or insure? A game-theoretic analysis of information security games," in WWW '08: Proceedings of the 17th International Conference on World Wide Web. ACM, 2008.

[6] J. Grossklags, B. Johnson, and N. Christin, "When information improves information security," in FC '10: Proceedings of the 14th International Conference on Financial Cryptography and Data Security. Springer, Jan 2010.

[7] N. S. Glance and B. A. Huberman, "The outbreak of cooperation," The Journal of Mathematical Sociology, vol. 17, no. 2, pp. 281–302, 1993.

[8] P. Dourish, R. E. Grinter, J. Delgado de la Flor, and M. Joseph, "Security in the wild: user strategies for managing security as an everyday, practical problem," Personal and Ubiquitous Computing, vol. 8, no. 6, pp. 391–401, Nov 2004.

[9] A.-L. Barabasi and R. Albert, "Emergence of scaling in random networks," Science, vol. 286, pp. 509–512, Oct 1999.

[10] A.-L. Barabasi, "Scale-free networks: A decade and beyond," Science, vol. 325, no. 5939, pp. 412–413, Jul 2009.

[11] L. J. Camp, "Reliable usable signaling to defeat masquerade attacks," in WEIS '06: Proceedings of the 5th Workshop on the Economics of Information Security, 2006.

[12] J. Hirshleifer, "From weakest-link to best-shot: The voluntary provision of public goods," Public Choice, vol. 41, no. 3, pp. 371–386, 1983.

[13] H. Varian, "System reliability and free riding," in Economics of Information Security, ser. Advances in Information Security, L. Camp and S. Lewis, Eds. Springer, 2004, vol. 12, pp. 1–15.

[14] M. Kandori, "Social norms and community enforcement," The Review of Economic Studies, vol. 59, no. 1, pp. 63–80, 1992.

[15] M. Feldman, K. Lai, I. Stoica, and J. Chuang, "Robust incentive techniques for peer-to-peer networks," in EC '04: Proceedings of the 5th ACM Conference on Electronic Commerce. ACM, 2004.

[16] M. Feldman, C. Papadimitriou, J. Chuang, and I. Stoica, "Free-riding and whitewashing in peer-to-peer systems," IEEE Journal on Selected Areas in Communications, vol. 24, no. 5, pp. 1010–1019, May 2006.

[17] R. M. Axelrod, The Evolution of Cooperation. Basic Books, 1984.

[18] P. Kollock, "Social dilemmas: The anatomy of cooperation," Annual Review of Sociology, vol. 24, no. 1, pp. 183–214, 1998.

[19] M. A. Nowak and R. M. May, "Evolutionary games and spatial chaos," Nature, vol. 359, no. 6398, pp. 826–829, Oct 1992.

[20] F. C. Santos and J. M. Pacheco, "Scale-free networks provide a unifying framework for the emergence of cooperation," Phys. Rev. Lett., vol. 95, p. 098104, 2005.

[21] H. Ohtsuki, C. Hauert, E. Lieberman, and M. A. Nowak, "A simple rule for the evolution of cooperation on graphs and social networks," Nature, vol. 441, no. 7092, pp. 502–505, May 2006.

[22] Y.-G. Seo, S.-B. Cho, and X. Yao, "The impact of payoff function and local interaction on the n-player iterated prisoner's dilemma," Knowledge and Information Systems, vol. 2, pp. 461–478, 2000.

[23] G. Rezaei, M. Kirley, and J. Pfau, "Evolving cooperation in the n-player prisoner's dilemma: A social network model," in ACAL '09: Proceedings of the 4th Australian Conference on Artificial Life: Borrowing from Biology. Springer, 2009.

[24] M. Afergan and R. Sami, "Repeated-game modeling of multicast overlays," in INFOCOM '06: Proceedings of the 25th IEEE International Conference on Computer Communications, 2006.
