[IEEE 2011 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops) - Seattle, WA, USA (2011.03.21-2011.03.25)] 2011 IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops) - Analyzing the incentives in Community-based Security Systems

Analyzing the Incentives in Community-based Security Systems

Pern Hui Chia
Centre for Quantifiable Quality of Service in Communication Systems (Q2S)
Norwegian University of Science and Technology (NTNU)
chia@q2s.ntnu.no

Abstract — Apart from mechanisms to make crowd-sourcing secure, the reliability of a collaborative system is dependent on the economic incentives of its potential contributors. We study several factors related to the incentives in a community-based security system, including the expectation on the social influence and the contagion effect of generosity. We also investigate the effects of organizing community members differently in a complete, random and scale-free structure. Our simulation results show that, without considering any specific incentive schemes, it is not easy to encourage user contribution in a complete-graph community structure (global systems). On the other hand, a moderate level of cooperative behavior can be cultivated when the community members are organized in the random or scale-free structure (social networks).

    Keywords-Incentives, Collaborative Security, Game Theory

I. INTRODUCTION

Despite the popularity of reputation and recommender systems, relying on community effort for security purposes is still a new concept to many. PhishTank [1] and Web Of Trust (WOT) [2] are two of the few systems that employ user inputs to improve web security. PhishTank relies on user reporting and voting against suspected phishes, while WOT collates user ratings on several trust and security aspects of websites. Researchers have looked at the reliability of Community-based Security Systems (CSS). Moore and Clayton [3] argued that as participation in PhishTank follows a power-law distribution, its outputs are particularly susceptible to manipulation by the few highly active contributors.

Indeed, besides system design to prevent manipulation by dishonest users or attackers, the reliability of a CSS is highly dependent on the incentives of its potential contributors. When many users contribute actively, diverse user inputs serve to cancel out the errors made by individuals and scrutinize against manipulative attempts. On the other hand, when many users do not contribute, the outcomes can be biased towards the judgment of a few, or be manipulated easily. In the reverse manner of the tragedy of the commons [4], which depicts the situation whereby individuals consume the common resource irresponsibly, motivating active participation in a CSS is a problem of public goods provisioning such that individuals face a cost that discourages them from contributing to the common protection.

In this work, we look at several factors related to the incentives in a CSS. We adapt the normalized total-effort security game in [5][6] to depict the scenario of collaborative security protection, but our model takes into account the long-term user consideration using the framework of infinitely repeated games. We first describe the basic model in Section II and extend it with the expectation on social influence in Section III. We study the effects of user dynamics and a possible contagion effect of generosity in Section IV. We find that it is easier to encourage a moderate level of user contribution in a random or scale-free community structure than in the complete-graph structure of global systems in Section V.


II. THE BASIC MODEL

Imagine a community-based system that collates evaluation reports from its members on some trust or security aspects of websites. Inputs from all members are important as they serve to diversify the inputs and cancel out errors made by individuals, in addition to enabling a level of scrutiny on each other's inputs. Without a sufficient level of contribution, a CSS will be deemed unreliable and abandoned by its members. An equally undesired scenario arises when there is a highly skewed participation ratio such that the system can be completely undermined when the few highly active users become corrupted or stop participating [3].

    A. An Infinitely Repeated Total-effort Security Game

We use an n-player repeated game to model a CSS consisting of N rational members. We first consider a complete-graph community structure, as in global systems like PhishTank and WOT, where the N = n members collaborate in every round t for common protection. Each game round evaluates a different website (target).

We assume that all members value the benefit of protection b equally and have the same cost of contribution c. Assuming also that the inputs from all members are equally important, we formulate the homogeneous utility U_{i,t} received by member i at game round t to be linearly dependent on the ratio of contribution by the n collaborating members, following the notion of normalized total-effort security in [5][6], as follows:

U_{i,t} = \frac{\sum_j a_{j,t}}{n}\, b - c\, a_{i,t}    (1)

with a_{i,t} denoting the binary action of either {1: contribute, 0: do not contribute} by member i at game round t.
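The per-round utility in Eq. (1) is straightforward to sketch in code. The implementation below is illustrative (the function name is my own, not from the paper); the defaults b = 2 and c = 1 follow the values in Table I.

```python
# Illustrative sketch of the per-round utility in Eq. (1);
# names are assumptions, values b = 2 and c = 1 follow Table I.
def round_utility(actions, i, b=2.0, c=1.0):
    """U_{i,t} = (sum_j a_{j,t} / n) * b - c * a_{i,t}"""
    n = len(actions)
    return (sum(actions) / n) * b - c * actions[i]

# Full contribution yields b - c for everyone:
print(round_utility([1] * 100, 0))        # 1.0
# A lone contributor earns only b/n - c:
print(round_utility([1] + [0] * 99, 0))   # -0.98
```

The two printed cases correspond directly to the payoffs discussed in the next paragraph: b − c under full contribution and b/n − c for a lone contributor.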

3rd International Workshop on Security and Social Networking

978-1-61284-937-9/11/$26.00 © 2011 IEEE

When all members contribute, each of them receives a utility of b − c. If no one but member i contributes, his utility is b/n − c. Assume b > c > b/n such that contributing to a CSS is a case of the n-person prisoner's dilemma. We further assume an infinitely repeated game to depict the expectation by individual members that the system will evaluate infinitely many websites and exist until an unforeseen future. If the system is known to last only for a finite amount of time, not contributing at all game rounds will be the (sub-game perfect) equilibrium strategy.

We consider that individual members rank their infinite payoff stream using the δ-discounted average criterion. The discount factor δ_i characterizes how a player weighs his payoff in the current round compared to future payoffs. In the context of this paper, it can be interpreted as how a member perceives the long-term importance of common protection and his relationship with the n − 1 interacting peers. We assume that δ_i is heterogeneous. A short-sighted member has δ_i ≈ 0, while a player who values the long-term benefit of the CSS, or who cares about the long-term relationship with other members, has δ_i ≈ 1. For 0 ≤ δ_i < 1, the δ-discounted average payoff of member i is given by:

(1 - \delta_i) \sum_{t=1}^{\infty} (\delta_i)^{t-1}\, U_t    (2)
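A quick numeric check of Eq. (2): for a constant payoff stream U_t = U, the δ-discounted average equals U itself, since (1 − δ) Σ_{t≥1} δ^{t−1} = 1. The truncated-sum sketch below is illustrative (the function name is my own):

```python
# Truncated evaluation of the delta-discounted average in Eq. (2).
# For a constant stream U_t = U, the criterion returns (almost) U.
def discounted_average(payoffs, delta):
    return (1 - delta) * sum(delta ** t * u for t, u in enumerate(payoffs))

stream = [1.0] * 10_000   # everyone contributes forever: U_t = b - c = 1
print(round(discounted_average(stream, 0.9), 6))   # 1.0
```

This normalization is why payoffs such as b − c = 1 can be read directly off the y-axes of the figures, regardless of δ_i.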

Analyzing the equilibrium behaviors of the n > 2 community members is more complicated than in a 2-player repeated game. A trivial setting is when all members are assumed to employ an n-player grim trigger strategy, which considers each member to be contributing initially, but threatens to stop contributing forever if he realizes that any of his n peers has not contributed in the previous round. Given this largest threat of punishment, an equilibrium whereby all members will always contribute can be achieved if, for all i:

\delta_i > \frac{cn - b}{bn - b}    (3)


A simple relationship between the cost c and benefit b can now be established. If c is close to b, the required δ_i approaches 1 as n increases. This reflects the real-life scenario that user inputs are hard to obtain if the contribution cost is large relative to the benefit of protection.
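The behavior of the threshold in Eq. (3) is easy to verify numerically. The sketch below (names are my own) uses the Table I values b = 2, c = 1, and also a cost close to the benefit:

```python
# Grim-trigger condition from Eq. (3): delta_i > (c*n - b) / (b*n - b)
def grim_trigger_threshold(n, b=2.0, c=1.0):
    return (c * n - b) / (b * n - b)

for n in (2, 10, 100, 1000):
    print(n, round(grim_trigger_threshold(n), 4))

# With b = 2, c = 1 the threshold climbs toward c/b = 0.5; with c close
# to b (e.g. c = 1.9) it approaches 1 as n grows, as discussed above.
print(round(grim_trigger_threshold(1000, b=2.0, c=1.9), 4))
```

For large n the threshold tends to c/b, so the condition becomes hard to satisfy exactly when the contribution cost nears the protection benefit.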

A challenge in a repeated n-player game is that one cannot identify and punish those who have not contributed without a centralized monitoring mechanism, which can be expensive to build or can threaten the anonymity of contributors. A player can only work out the contribution ratio of the others in the previous round, r_{t−1}, based on the payoff he receives. The n-player grim trigger strategy inefficiently punishes everyone, making it an unrealistic strategy.

III. THE EXPECTATION ON SOCIAL INFLUENCE

Rather than using the grim trigger strategy, we adapt an idea from [7] such that the community members reason for their respective choice of action, depending not only on the past but also on their expectation as to how their actions can influence the choices of others in the subsequent rounds. We construct two simple influence rules, as follows:

Linear influence. A player believes that his contribution will have a positive influence on his peers and increase their contribution ratio by γ in the subsequent game rounds. Similarly, he expects that their contribution ratio will drop by γ if he does not contribute. The rule can be written as:

\hat{r} = \min(\max(r_{t-1} \pm \gamma,\ 0),\ 1)    (4)

Sigmoid influence. Same as linear influence; however, the contribution ratio in the subsequent rounds is updated following a sigmoid curve. Specifically, a player reasons that his action will have a reduced influence on others when the current contribution level is close to the extremes, 0 or 1:

\hat{r} = \frac{1}{1 + e^{-w (r_{t-1} \pm \gamma - \frac{1}{2})}}    (5)


With the above expectation, a member i will contribute at time t, if:

\frac{1}{1 - \delta_i} \left[ V_c(\hat{r}) - V_n(\hat{r}) \right] > 0    (6)

with V_c and V_n denoting the utilities of contributing and not contributing respectively, based on the last observed contribution level r_{t−1} and the expected future contribution ratio r̂. This assumes that a member believes that if he is to contribute now, since his action will cause a positive influence on the others, he will be better off contributing also in the future. The same reasoning applies for the case of not contributing. These simple rules (4), (5) and (6) model the bounded rationality of users. Indeed, it is non-trivial to reason about the best responses in an interconnected structure. Each member is not aware of the discounting factors of others. One also may not know about his peers' peers and how they perform in their respective games.
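The influence rules (4)-(5) and the decision test (6) can be sketched as follows. Note this is a simplified reading: the value functions V_c and V_n are approximated here by a discounted payoff stream at the fixed expected ratio r̂, which the paper leaves implicit, and all function and variable names are illustrative.

```python
import math

def linear_influence(r_prev, gamma, contribute):
    # Eq. (4): the expected ratio shifts by +/- gamma, clamped to [0, 1]
    shift = gamma if contribute else -gamma
    return min(max(r_prev + shift, 0.0), 1.0)

def sigmoid_influence(r_prev, gamma, contribute, w=10.0):
    # Eq. (5): the same shift, squashed through a sigmoid centred at 1/2,
    # so the influence shrinks near the extremes 0 and 1
    shift = gamma if contribute else -gamma
    return 1.0 / (1.0 + math.exp(-w * (r_prev + shift - 0.5)))

def will_contribute(r_prev, gamma, delta_i, b=2.0, c=1.0,
                    rule=linear_influence):
    # Eq. (6): contribute iff the discounted value of contributing
    # exceeds that of not contributing, under the expected influence
    r_c = rule(r_prev, gamma, True)    # expected ratio if I contribute
    r_n = rule(r_prev, gamma, False)   # expected ratio if I do not
    v_c = (r_prev * b - c) + delta_i / (1 - delta_i) * (r_c * b - c)
    v_n = (r_prev * b) + delta_i / (1 - delta_i) * (r_n * b)
    return v_c > v_n
```

Under this simplification, γ = 1.0 reproduces the grim-trigger-like threshold of roughly δ_i > 0.5 seen in Figure 1: `will_contribute(0.5, 1.0, 0.6)` is `True`, while `will_contribute(0.5, 1.0, 0.4)` is `False`.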

    A. Simulation Results

We consider several levels of expectation on how much an action can influence the actions of others, as shown in Table I. Note that γ depicts the expectation that contributing (or not) has a positive (or negative) influence on peers' contribution ratio. The suffix s denotes the use of the sigmoid influence rule. Each expectation level is simulated 50 times for computing the mean payoff. In each simulation run, every member is assigned a new discounting factor drawn uniformly between a minimum value δ_min and 1.

Figure 1 shows the simulation results. With γ = 1.0, which is the equivalent of the grim trigger strategy, full contribution giving the maximum payoff of b − c = 1 is a stable outcome when all members have a discounting factor higher than a moderate threshold of about 0.5. This is shown by the dot-dashed line in Figure 1. However, as aforementioned, the grim trigger strategy is not realistic in practice.



Table I

Var.   Description                          Value(s)
N      Total community members              100
c      Contribution cost                    1
b      Benefit of full protection           2
w      Steepness of the sigmoid function    10
γ      Expected social influence on the     0.25, 0.33s,
       contribution ratio of peers          0.50, 1.0

[Figure 1. Mean payoff in a Complete-graph (C) community structure, plotted against δ_min, with curves for γ = 0.25, 0.33s, 0.5 and 1.0.]

With γ = 0.5, that is, when members expect that non-contribution will influence half of their peers to also stop contributing, a fully cooperative equilibrium can only be achieved if all members place a large weight on the benefit of long-term protection, as shown by the dotted line in Figure 1. As the community members expect that their respective actions will have reduced influence on others' motivation (e.g., γ = 0.33s, 0.25), we find that the average payoff remains at zero even as δ_min approaches 1. In other words, no community members contribute in equilibrium.

The above has several implications. First, the results show that, without the help of any incentive schemes, the level of user contribution for a CSS can be expected to be very low. Centralized mechanisms such as monitoring, reputation and micro-payment may help to encourage contribution, but there can be challenges (including cost and anonymity concerns) in implementing them in practice. The results also highlight the role of education in cultivating a sense of social responsibility (i.e., to increase the users' perception of γ) and in informing about the long-term importance and benefit of collaborative security (i.e., to increase the δ_i of users). This is also challenging, as it is well known that ordinary users do not regard security as their primary concern [8].


IV. USER DYNAMICS AND THE CONTAGION OF GENEROSITY

We investigate if a cooperative spirit can be cultivated given the presence of a small fraction of members who would contribute to the common protection unconditionally. These nice users can be thought of as those who are extremely generous in real life, or those who have been employed to ensure a minimal level of contribution in the system.

We also factor in simple user dynamics in the CSS. In each game round, a fraction m ≥ 0 of under-performing


Table II

Var.   Description                                       Value(s)
       Fraction of nice users per game round             0.04, 0.08
m      Fraction of users leaving and joining per round   0.01
       Transient rounds before user dynamics             5

[Figure 2. Mean payoff in a Complete-graph (C) community structure, plotted against δ_min, with a fraction of nice users of 0.04 and dynamics of m = 0.01; curves for γ = 0.25, 0.33s and 0.5.]

users (i.e., those with average payoffs ≤ 0 and who have been through a minimum number of transient rounds) are programmed to leave. When a user leaves, we assume that another user joins and connects with the remaining members. This models the real-life scenario where frustrated users leave, while new users join the community continuously.
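A minimal sketch of this churn step, under the assumptions that payoffs are tracked as running averages and that a newcomer draws a fresh discount factor uniformly from [δ_min, 1); all names here are illustrative, not the paper's implementation:

```python
import random

def churn_step(avg_payoffs, rounds_played, deltas, tau=5,
               delta_min=0.2, rng=random):
    """Replace the worst under-performing member, if any.

    A member is eligible to leave when his average payoff is <= 0
    and he has been through at least tau transient rounds.
    Returns the index of the replaced member, or None.
    """
    eligible = [i for i, (p, r) in enumerate(zip(avg_payoffs, rounds_played))
                if p <= 0 and r >= tau]
    if not eligible:
        return None
    worst = min(eligible, key=lambda i: avg_payoffs[i])
    # The newcomer takes the leaver's place and connects to the others;
    # his state is reset and his discount factor is drawn afresh.
    deltas[worst] = rng.uniform(delta_min, 1.0)
    avg_payoffs[worst] = 0.0
    rounds_played[worst] = 0
    return worst
```

For example, `churn_step([-0.5, 0.3, -0.9], [6, 6, 6], [0.4, 0.7, 0.3])` replaces member 2, the eligible member with the lowest average payoff.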

    A. Simulation Results

Table II summarizes the variables and values used in our simulation to study the effect of community dynamics and nice users. As before, each scenario is repeated with 50 simulation runs for computing the mean value. Without user dynamics, as in Section III, we observe that the contribution ratio and average payoff settle quickly after several game rounds. With user dynamics, these values fluctuate, but only slightly, as one member may leave (while another joins) per game round. Considering this, in each simulation run, we measure the average payoff of all community members only after t = 250 game rounds.

Figure 2 plots the average payoff of the community members when a small fraction of nice users (0.04) and dynamics (m = 0.01) are considered. In every game round, the worst-performing member is programmed to leave, while 4 members are randomly selected to contribute unconditionally. As shown in the figure, the presence of 4 nice members does help to increase the contribution level (average payoff) slightly, from zero as seen in Figure 1 to about 0.15 in Figure 2. This shows the role played by generous members, or those who have been employed, in encouraging the others to contribute.

However, notice that other than the case with γ = 0.5, the average payoff is flat even as the minimum discounting factor δ_min approaches 1. This hints at the limited impact of the nice users in a global system (complete-graph community structure) after all. It also highlights the risk


of over-reliance on a small group of nice users, causing a highly disproportionate contribution ratio in the system. A highly skewed contribution pattern can harm the Byzantine f...

