Incorporating Evidence into Trust Propagation Models Using Markov Random Fields

2011 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Seattle, WA, USA, March 21-25, 2011








Incorporating Evidence into Trust Propagation Models Using Markov Random Fields

Hasari Tosun
Department of Computer Science
Montana State University
EPS 357, PO Box 173880
Bozeman, MT 59717-3880

John W. Sheppard
Department of Computer Science
Montana State University
EPS 357, PO Box 173880
Bozeman, MT 59717-3880


Abstract—Current trust models for social networks commonly rely on explicit voting mechanisms where individuals vote for each other as a form of trust statement. However, there is a wealth of information about individuals beyond trust voting in emerging web-based social networks. Incorporating sources of evidence into trust models for social networks has not been studied to date. We explore a trust model for social networks based on Markov Random Fields, which we call MRFTrust, that allows us to incorporate sources of evidence. To allow comparative evaluation, a state-of-the-art local trust algorithm, MoleTrust, is also investigated. Experimental results of the algorithms reveal that our trust algorithm that incorporates evidence performs better in terms of coverage. It is competitive with the MoleTrust algorithm in prediction accuracy and superior when focusing on controversial users.

Keywords—Trust Metrics, Reputation System, Social Network, Markov Random Fields


I. INTRODUCTION

Recently, online Web services such as MySpace, FaceBook, Friendster, LiveJournal, Blogger, LinkedIn, Twitter, and Orkut have emerged as popular social networks. This new generation of social networks is enormous, rich in information, and extremely dynamic. Moreover, in today's Web, a vast amount of content is created by users. This content can range from factual information to opinions about a person, a product, or a company. People constantly interact with other people about whom they have no immediate information. As a result, users of these services are constantly faced with questions of how much they should trust the content created or opinion provided by another person, and how much they should trust the unknown person with whom they are about to interact.

With this uncertainty in mind, many e-commerce companies such as eBay and Amazon enable users to rate other users or their reviews by providing a trust vote. Most online forums have some mechanism for users to rate others' opinions or responses. In some cases, the voting is implicit. For example, reading an article can be considered an implicit positive vote. Utilizing this vast amount of trust data and aggregating trust scores for users has become a real challenge for those companies. Trust and reputation are also very relevant to Peer-to-Peer (P2P) networks such as file-sharing networks. P2P networks are mainly used for sharing and distributing information. Thus, they are vulnerable to the spread of unauthentic files [1], [2], [3], [4]. An alternative utilization of the trust concept is used by the Google search engine; a link from one web site to another is an expression of trust [5].

As the Semantic Web gains acceptance, understanding the credibility of metadata about authors is becoming important [6]. While designing recommender systems, one researcher found that there is a strong correlation between trust and user similarity [7]. Thus, trust became an essential variable in computing user similarity [8]. Finally, the concept of trust is extensively applied to social networks. There is a wealth of information on trust and reputation scoring in social networks [9], [10], [8], [6], [11]. For example, Mui documented the theories and approaches about trust scores and reputation systems using Bayesian networks for inference on social networks [12].

There is no universal definition of trust and reputation. Barbalet characterized trust and its consequences in detail. He postulates that it is insufficient to define trust in terms of confident expectation regarding another's behavior, as many researchers have defined it [13]. Instead, the author characterizes trust in terms of acceptance of dependency (the trust giver grants control or power to the trustee, and thus accepts dependence on the trustee) in the absence of information about the other's reliability, in order to create an outcome otherwise unavailable. Golbeck and Hendler adopted a narrower definition of trust for social networks: trust in a person is a commitment to an action based on belief that the future actions of that person will lead to a good outcome [11].

Although researchers generally don't agree on the definition of trust, two properties of trust are used for aggregation: transitivity and asymmetry. Transitivity means that if A trusts B, who trusts C, then A trusts C. Asymmetry means that if A trusts B, it does not follow that B also trusts A. The majority of trust propagation algorithms utilize the transitivity property [10], [1], [14], [6], [2]. It should be noted that this property may not always hold with distrust [15]. Moreover, [16] and [17] defined two types of trust: referral trust and direct functional trust. If A trusts B, who trusts C, then the trust between A-B and B-C is direct functional trust. However, if B recommends C to A as trustworthy, it is referral trust.

3rd International Workshop on Security and Social Networking
978-1-61284-937-9/11/$26.00 ©2011 IEEE

Modeling trust networks and propagating trust is a challenging task: 1) trust networks are huge and sparse, and 2) it is often difficult to model human belief and trust. Thus, researchers have often proposed simplistic approaches for trust propagation. Ziegler and Lausen categorized trust metrics on three dimensions [14]: network perspective, computation locus, and link evaluation. For the network perspective, they categorized trust metrics as global and local. Global trust metrics consider all links and nodes in the network, whereas local trust metrics take into account only a partial network. Computation locus refers to the place where trust relationships are computed. In distributed or local trust metrics, the computational load is distributed to every node in the network, whereas in computationally centralized systems, all metric computations are performed on a single machine. The tradeoff between the two approaches involves memory, performance, and security. Finally, link evaluation determines whether the trust metrics themselves are scalar or group trust metrics. In group trust metrics, trust scores are computed for a set of individuals, whereas for scalar metrics only trust scores between two individuals are computed.

An increasing number of articles have been published on modeling trust networks and evaluating trust metrics [4], [15], [14], [11], [1], [18], [16], [13], [3] using different computational methods. For example, Wang and Vassileva designed a Bayesian network-based trust model for P2P networks [3]. The model represents different features of trust as leaf nodes of Naive Bayes networks. On the other hand, [18] developed a model based on fuzzy logic. Another popular trust model is the Appleseed trust metric, based on a spreading activation model [14], [6]. Two different models based on eigenvalue propagation were designed by [1] and [15].

Massa and Avesani studied the challenges of computing trust metrics in a social network where data are sparse [19]. In such networks, neither a global reputation nor a simple trust score is a viable option, since a large percentage of the participants are considered to be controversial; they are distrusted by some and trusted by others. Thus, the authors proposed a state-of-the-art framework and algorithm called MoleTrust that uses local trust metrics. However, this approach does not incorporate other sources of evidence. For example, the epinion.com dataset contains articles written by users [20], where these articles are also rated by other users. Determining whether such information is useful for a trust algorithm is the challenge. We hypothesize that including more evidence in a trust model will improve the prediction power of the model and its coverage.

The rest of the paper is organized as follows. Our trust prediction algorithm based on Markov Random Fields is described in Section II. Then, in Section III, we describe the dataset and present our results. The last section gives the conclusions and future directions for our research.

II. METHODS AND PROCEDURES

In this paper, we describe our approach to developing and using a trust network model based on Markov Random Fields (MRFs). A detailed introduction to MRFs is given in [21]. An MRF is a stochastic process that exhibits the Markov property in terms of the interaction of neighboring nodes in the network. MRF models have a wide range of application domains. The nodes in the MRF graph represent random variables, and the edges represent the dependencies between variables. In our approach, we use the same type of model for propagating trust scores in social networks.

The joint probability distribution over X and Y can be represented by an MRF in the following way:

P(x, y) = (1/Z) ∏_(i,j) ψ(xi, xj) ∏_i φ(xi, yi)

where Z is a normalization factor (also called the partition function), ψ(xi, xj) represents the pairwise influence between nodes xi and xj in the network (often referred to as the pairwise compatibility matrix), and φ(xi, yi) is a local evidence function that forms a distribution over possible states xi given only its observations yi. When considering the application of MRFs to social network trust prediction, we note that the social network results in a trust network whenever users rate each other. Based on this observation, we developed a local algorithm for learning trust metrics by augmenting an MRF representation of social networks with additional sources of evidence. Our framework allows us to evaluate an active user's trust for an unknown person in the network.

There are two common methods for inference with MRF models [22]: 1) Markov Chain Monte Carlo (MCMC) sampling, such as Gibbs sampling, and 2) belief propagation. The approach we used is based on belief propagation, so we begin by describing the main steps in the belief propagation algorithm. Essentially, belief propagation proceeds as follows:

1) Select random neighboring nodes xk and xj.
2) Send message Mkj from xk to xj.
3) Update the belief about the marginal distribution at node xj.
4) Go to step 1 until convergence.

Message passing in step 2 is carried out as

Mkj(xj) = Σ_xk ψ(xk, xj) b(xk)    (1)


where b(xk) is the current belief value associated with node xk. Belief updating in step 3 is then computed as

b(xj) = α φ(xj, yj) ∏_{k ∈ Neighbor(j)} Mkj(xj)    (2)

where α is a normalization factor, and Neighbor(j) is the set of nodes adjacent to node xj.
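The message-passing and belief-update steps above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the network, potentials, and evidence are invented, and for reproducibility we sweep all directed edges in a fixed order instead of sampling random neighbor pairs as in step 1.

```python
# Minimal belief propagation sketch over a toy 3-node trust network.
states = [0, 1]                        # 0 = untrusted, 1 = trusted
neighbors = {0: [1], 1: [0, 2], 2: [1]}
y = {0: 1, 1: 1, 2: 0}                 # observed evidence per node

def psi(xk, xj):
    # pairwise compatibility: neighbors tend to agree
    return 0.8 if xk == xj else 0.2

def phi(xj, yj):
    # local evidence: state tends to match the observation
    return 0.9 if xj == yj else 0.1

# beliefs start uniform; messages start as the constant-1 function
beliefs = {n: {s: 0.5 for s in states} for n in neighbors}
messages = {(k, j): {s: 1.0 for s in states}
            for k in neighbors for j in neighbors[k]}

def send_message(k, j):
    # Equation (1): M_kj(xj) = sum over xk of psi(xk, xj) * b(xk)
    for xj in states:
        messages[(k, j)][xj] = sum(psi(xk, xj) * beliefs[k][xk]
                                   for xk in states)

def update_belief(j):
    # Equation (2): b(xj) = alpha * phi(xj, yj) * prod of incoming messages
    raw = {}
    for xj in states:
        prod = 1.0
        for k in neighbors[j]:
            prod *= messages[(k, j)][xj]
        raw[xj] = phi(xj, y[j]) * prod
    alpha = 1.0 / sum(raw.values())    # normalization factor
    for xj in states:
        beliefs[j][xj] = alpha * raw[xj]

for _ in range(20):                    # iterate until convergence
    for k in neighbors:
        for j in neighbors[k]:
            send_message(k, j)
            update_belief(j)
```

After the sweep, each node's belief is a normalized distribution over its trust states, pulled toward its own evidence and its neighbors' beliefs.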

To infer the trust scores of users unknown to the current user in the network, a local network is generated from the global social network. The local trust network has a limited horizon. In other words, instead of propagating trust statements using the global network, we create a local network based on a specific user's neighborhood. For example, for a given user A, we generate a local network that contains all neighboring users that are only a finite distance (in terms of the number of links crossed) away from A. Thus, the trust score of a person in the local network can be evaluated with respect to the active user A.
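The horizon-limited extraction described above amounts to a bounded breadth-first traversal. The sketch below uses an invented toy graph and horizon values; it is our illustration of the idea, not the paper's code.

```python
from collections import deque

# Hypothetical global trust network as a directed adjacency list.
global_net = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": [],
    "F": [],
}

def local_network(source, horizon):
    """Return all users within `horizon` links of `source` (BFS)."""
    depth = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if depth[u] == horizon:
            continue                   # do not expand past the horizon
        for v in global_net.get(u, []):
            if v not in depth:         # skip nodes already reached
                depth[v] = depth[u] + 1
                queue.append(v)
    return set(depth)
```

For example, `local_network("A", 2)` keeps A's neighborhood out to two links and excludes F, which is three links away.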

We compared the results of our trust propagation approach to the MoleTrust algorithm, presented in [19]. The process of generating the local network is similar to MoleTrust in that it is based on the intuition that the average trust path length between two individuals is small [14]. Moreover, due to computational complexity and the objective that any trust prediction system operate online, the local network needs to be small.

For purposes of our experiments, we re-implemented MoleTrust to ensure a fair and carefully controlled comparison. To predict how much a user A trusts a user B, denoted T(A, B), MoleTrust generates a local directed graph, rooted at A, from a given global social network. For each graph depth, it adds links that represent trust statements between users. To avoid cycles, it does not add nodes that are already in the local network. The depth or distance of the graph is determined by a parameter called the horizon. If the target user is in the local graph, a trust prediction is made. Otherwise, no prediction is made. As we will see, this restriction has a direct impact on the coverage of the MoleTrust algorithm. Trust propagates from the root node to the leaf nodes according to equation (3), where b(xj) is the trust value or belief predicted at node xj:

b(xj) = Σ_{k ∈ Predecessor(j)} b(xk) T(xk, xj) / Σ_{k ∈ Predecessor(j)} b(xk)    (3)

Here T(xk, xj) is the trust value on the edge between nodes xk and xj, and Predecessor(j) is the set of nodes with edges terminating at xj. The trust values for the nodes are calculated from this equation, whereas the trust values on the edges are specified explicitly in the network. Edge trust values represent the explicit trust vote of one user about another. To start the belief propagation process, the belief of the root node is initialized to 1.0.
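Reading equation (3) as a weighted average over qualifying predecessors, MoleTrust-style propagation over a small DAG can be sketched as below. The graph, the edge trust values, and the 0.6 threshold are invented for illustration; the threshold models the link blocking discussed later, and is not a value from the paper.

```python
# Sketch of MoleTrust-style propagation over a toy DAG of trust
# statements. Nodes are listed in breadth-first order from the root;
# edge_trust[(k, j)] is the explicit trust statement from k about j.
order = ["A", "B", "C", "D"]
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
edge_trust = {("A", "B"): 0.9, ("A", "C"): 0.4,
              ("B", "D"): 0.8, ("C", "D"): 0.2}
THRESHOLD = 0.6  # illustrative: parents below this are blocked

belief = {"A": 1.0}  # the root's belief is initialized to 1.0
for j in order[1:]:
    # keep only predecessors whose own predicted trust passes the threshold
    preds = [k for k in predecessors[j] if belief.get(k, 0.0) >= THRESHOLD]
    if not preds:
        belief[j] = None  # no prediction possible for this user
        continue
    # Equation (3): weighted average of edge trust, weighted by beliefs
    num = sum(belief[k] * edge_trust[(k, j)] for k in preds)
    den = sum(belief[k] for k in preds)
    belief[j] = num / den
```

In this toy run, C's predicted trust (0.4) falls below the threshold, so only B contributes to the prediction for D.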

MoleTrust was designed to address the issue of predicting trust in the presence of controversial users. The most controversial users have approximately equal numbers of distrust and trust statements. The controversiality level of a user is given in equation (4) [19]:

c(xj) = (|Trust(xj)| − |Distrust(xj)|) / (|Trust(xj)| + |Distrust(xj)|)    (4)

where Trust(xj) is the set of trust statements for user/node xj, and Distrust(xj) is the set of distrust statements for xj. This controversiality level has the range −1.0 to 1.0. A user with a controversiality level of −1.0 is distrusted by all of his or her judgers, whereas a user with a controversiality level of 1.0 is trusted by all users who voted. On the other hand, a user with controversiality 0.0 has an equal number of trust and distrust votes. Therefore, a user with 0.0 controversiality is the most controversial user. We discretized the controversiality levels of the users into buckets of width 0.1. Finally, we define the coverage of a prediction algorithm to be the percentage of statements that are predictable by that algorithm.
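Equation (4) is straightforward to implement from the vote counts; the sketch below (function name ours) shows the endpoints of the scale.

```python
def controversiality(n_trust, n_distrust):
    """Equation (4): controversiality level from vote counts.

    Returns 1.0 when every voter trusts the user, -1.0 when every
    voter distrusts the user, and 0.0 (the most controversial case)
    when trust and distrust votes balance exactly.
    """
    return (n_trust - n_distrust) / (n_trust + n_distrust)
```

Discretizing into buckets of width 0.1 then amounts to, e.g., rounding the result to one decimal place.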

The MoleTrust algorithm accepts an incoming link to node xj if the predicted trust value of xj's corresponding parent node is above a threshold. Otherwise, the link between them is blocked from propagating trust scores. We claim this approach biases the performance of the MoleTrust algorithm by limiting its predictions to those that are easy. Although this approach may result in accurate trust predictions, it results in low coverage. For a given graph depth or horizon, if the user is not in the local network, MoleTrust cannot make a prediction. Moreover, if trust propagation does not reach the target user due to the link blocking explained above, the prediction cannot be made. For example, if we were to predict Alice's trust in Mark based on the network in Figure 1 without a direct link between them, the trust would propagate indirectly through the nodes Bob and Dave. However, if neither Bob nor Dave has a direct link to Mark, a prediction cannot be made by MoleTrust. Th...

