Collaboration of Untrusting Peers with Changing Interests Baruch Awerbuch, Boaz Patt-Shamir, David Peleg, Mark Tuttle Review by Pinak Pujari


Page 1:

Collaboration of Untrusting Peers with Changing Interests

Baruch Awerbuch, Boaz Patt-Shamir, David Peleg, Mark Tuttle

Review by Pinak Pujari

Page 2:

Introduction

Reputation systems are an integral part of e-commerce applications.

Marketplaces like eBay depend on reputation systems to improve customer confidence.

More importantly, they limit the economic damage done by disreputable peers.

Page 3:

Introduction: eBay example

For instance, in eBay, after every transaction the system invites each party to post its rating of the transaction on a public billboard that the system maintains.

Consulting the billboard is a key step before making a transaction.

Page 4:

Introduction: Possibility of fraud?

Scene #1: A group of sellers engages in phony transactions and rates these transactions highly, generating an appearance of reliability while ripping off other people.

Scene #2: A single seller behaves responsibly long enough to entice an unsuspecting buyer into a single large transaction, and then vanishes.

Reputation systems are valuable, but not infallible.

Page 5:

Model of the Reputation System

n players (some honest, some dishonest).
m objects (some good, some bad).
A player probes an object to learn whether it is good or bad.
The cost of a probe is 1 if the object is bad and 0 if the object is good.

Goal: find a good object while incurring minimal cost.
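The probe-cost model above can be sketched in code (a toy encoding; the function and parameter names are illustrative, not from the paper):

```python
import random

def probe(is_good: bool) -> int:
    """Probing a bad object costs 1; probing a good object costs 0."""
    return 0 if is_good else 1

def make_objects(m: int, beta: float, rng: random.Random) -> list[bool]:
    """A toy instance with m objects, a beta fraction of which are good."""
    n_good = int(beta * m)
    objects = [True] * n_good + [False] * (m - n_good)
    rng.shuffle(objects)
    return objects
```

A player's total cost is then simply the number of bad objects it probes while searching for a good one.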

Page 6:

Model of the Reputation System

Players collaborate by posting the results of their probes on a public billboard, and by consulting the board when choosing an object to probe.

Assume that entries are write-once, and that the billboard is reliable.

Page 7: Collaboration of Untrusting Peers with Changing Interests Baruch Awerbuch, Boaz Patt-Shamir, David Peleg, Mark Tuttle Review by Pinak Pujari

So what is the problem?Problem Definition: Some of the players are dishonest, and can

behave in an arbitrary fashion, including colluding and posting false reports on the billboard to entice honest players to probe bad objects.

Page 8:

Model of the Reputation System (contd.)

The execution of the system is as follows: a player reads the billboard, optionally probes an object, and writes to the billboard. (A randomized protocol chooses the object to probe based on the contents of the billboard.)

Honest players are required to follow the protocol.

Dishonest players, however, are allowed to behave in an arbitrary (Byzantine) fashion, including posting incorrect information on the billboard.

Page 9:

Strategy

1. Exploration rule: A player chooses an object uniformly at random and probes it. This might be a good idea if there are many good objects, or if there are many dishonest players posting inaccurate reports to the billboard.

2. Exploitation rule: A player chooses another player at random and probes whichever object it recommends (if any), thereby exploiting, or benefiting from, the effort of the other player. This might be a good idea if most of the players posting recommendations to the billboard are honest.

Page 10:

The Balanced Rule

In most cases, a player will not know how many honest players or good objects are in the system, so the best option is to balance the two approaches:

Flip a coin. If the result is “heads”, follow the Exploration rule. If the result is “tails”, follow the Exploitation rule.
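The coin-flip rule can be written as a short sketch (hypothetical names; the billboard is modeled as a write-once map from player id to recommended object id):

```python
import random

def balanced_rule(billboard: dict[int, int], m: int, n: int,
                  rng: random.Random) -> int:
    """Return the index of the object to probe next.

    Heads: Exploration rule (uniform random object).
    Tails: Exploitation rule (follow a random player's recommendation).
    """
    if rng.random() < 0.5:            # heads: explore
        return rng.randrange(m)
    k = rng.randrange(n)              # tails: pick a random player
    if k in billboard:                # follow its recommendation
        return billboard[k]
    return rng.randrange(m)           # no recommendation posted: explore
```

When the chosen player has posted nothing, falling back to a uniform probe is one natural choice; the slides only spell this case out later, for the partial access model.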

Page 11:

Models with Restricted Access

Dynamic object model: objects can enter and leave the system over time.

Partial access model: each player has access to a different subset of the objects.

Page 12:

Model of the Reputation System (contd.)

The execution of an algorithm is uniquely determined by the algorithm, the coins flipped by the players while executing the protocol, and three external entities:

1. The player schedule, which determines the order in which players take steps.
2. The dishonest players.
3. The adversary, which determines the behavior of the dishonest players.

Page 13:

Model of the Reputation System (contd.)

What is the adversary? The adversary is a function from a sequence of coin flips to a sequence of objects for each dishonest player to probe, and the results for the player to post on the billboard.

The adversary is quite powerful, and may behave in an adaptive, Byzantine fashion.

Page 14:

Model of the Reputation System (contd.)

What is an operating environment? An operating environment is a triple consisting of a player schedule, a set of dishonest players, and an adversary.

The purpose of the operating environment is to factor out all of the nondeterministic choices made during an execution, leaving only the probabilistic choices to consider.


Page 16:

The Dynamic Object Model

Operating environment:
1. The player schedule.
2. The dishonest players.
3. The adversary.
4. The object schedule, which determines when objects enter and leave the system, and their values.

m - upper bound on the number of objects concurrently present in the system.

β - lower bound on the fraction of good objects at any time, for some 0 ≤ β ≤ 1.

Page 17:

The Dynamic Object Model: Algorithm

The algorithm is an immediate application of the Balanced rule.

Algorithm DynAlg: If the player has already found a good object, probe it again. Otherwise, apply the Balanced rule.
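DynAlg can be sketched as a toy simulation, here with honest players only, no adversary, and a static object set (all names and simplifications are illustrative assumptions, not the paper's setup):

```python
import random

def simulate_dynalg(n_players: int, m_objects: int, beta: float,
                    seed: int = 0) -> int:
    """Run until every player holds a good object; return total probe cost."""
    rng = random.Random(seed)
    good = set(rng.sample(range(m_objects), max(1, int(beta * m_objects))))
    billboard: dict[int, int] = {}     # write-once: player -> good object found
    found: dict[int, int] = {}
    cost = 0
    while len(found) < n_players:
        j = rng.randrange(n_players)   # the schedule picks a player
        if j in found:
            continue                   # DynAlg: re-probe the known good object (cost 0)
        if rng.random() < 0.5:         # Balanced rule, heads: explore
            i = rng.randrange(m_objects)
        else:                          # tails: exploit a random player's post
            k = rng.randrange(n_players)
            i = billboard.get(k, rng.randrange(m_objects))
        if i in good:
            found[j] = i
            billboard.setdefault(j, i) # post the recommendation once
        else:
            cost += 1                  # probing a bad object costs 1
    return cost
```

Re-probing a known good object costs 0 in this model, which is why the loop simply skips players that are already satisfied.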

Page 18:

Analysis of Algorithm DynAlg

Given a probe sequence σ, switches(σ) denotes the number of distinct objects in σ.

Given an operating environment E, let σE(DynAlg) be the random variable whose value is the probe sequence of the honest players generated by DynAlg under E.

σ∗ denotes an optimal probe sequence for the honest players; cost(σ∗) is its cost.

Page 19:

Analysis of Algorithm DynAlg

Theorem: For every operating environment E and every probe sequence σ∗ for the honest players, the expected cost of σE(DynAlg) is at most

cost(σ∗) + switches(σ∗) · (2 − β) · (m + n ln n).

Page 20:

Proof: Partition the sequence σ∗ into subsequences σ∗ = σ1∗ σ2∗ · · · σK∗ such that for all 1 ≤ i < K:

- all probes in σi∗ are to the same object;
- σi∗ and σi+1∗ probe different objects.

Similarly, partition the sequence σ into subsequences σ = σ1 σ2 · · · σK such that |σi∗| = |σi| for all 1 ≤ i ≤ K.

Page 21:

Proof (contd.): Consider the difference cost(σi∗) − cost(σi).

If the probes in σi∗ are to a bad object, then trivially cost(σi) ≤ cost(σi∗).

To finish the proof, we show that if all probes in σi∗ are to a good object, then cost(σi) ≤ (2 − β) · (m + n ln n).

Page 22:

Proof (contd.): An object is i-persistent if it is good and it is present in the system throughout the duration of σi∗. A probe is i-persistent if it probes an i-persistent object.

Partition the sequence σi into subsequences σi = Di0 Di1 Di2 · · · Din, where Dik consists of all probes in σi that are preceded by i-persistent probes of exactly k distinct honest players.

Page 23:

Proof (contd.): Obviously, cost(σi) = Σk=0..n cost(Dik).

The expected cost of a single fresh probe in Dik is at most 1 − β/2 = (2 − β)/2.

Each fresh probe in Dik finds a persistent object with some probability pk, so the probability that Dik contains exactly ℓ fresh probes is (1 − pk)^(ℓ−1) · pk.

Therefore, the expected number of fresh probes in Dik is 1/pk, and the expected cost of Dik is at most (2 − β)/(2pk).
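The expected cost of Dik combines the per-probe cost bound with the mean of this geometric distribution, which is 1/pk. That mean can be sanity-checked numerically (a verification sketch, not part of the slides):

```python
def expected_fresh_probes(p: float, max_l: int = 10_000) -> float:
    """Truncated mean of the geometric distribution P(L = l) = (1-p)^(l-1) * p."""
    return sum(l * (1 - p) ** (l - 1) * p for l in range(1, max_l + 1))

# For p = 0.1, the truncated sum is very close to 1/p = 10.
```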

Page 24:

Proof (contd.): For k = 0, p0 ≥ 1/(2m). For k > 0, pk ≥ k/(2n).

So the expected cost of σi is at most

Σk=0..n (2 − β)/(2pk) ≤ (2 − β) · (m + Σk=1..n n/k) = (2 − β) · (m + n·Hn),

where Hn = Σk=1..n 1/k ≈ ln n, giving the (2 − β)(m + n ln n) term in the theorem.

Page 25:

The Partial Access Model

Here, each player can access only a subset of the objects.

The main problem with this model is that, in contrast to the full access model (where each player can access any object), with partial access it is difficult to measure the amount of collaboration a player can expect from other players in searching for a good object.

One way to overcome this difficulty is to concentrate on the amount of collective work done by subsets of players.

Page 26:

The Partial Access Model

Notation: Model the partial access to the objects with a bipartite graph G = (P, O, E), where P is the set of players, O is the set of objects, and a player j can access an object i only if (j, i) belongs to E.

For each player j, let obj(j) denote the set of objects accessible to j, and let deg(j) = |obj(j)|.

For each honest player j, let best(j) denote the set of good objects accessible to j.

Let N(j) be the set of all players (honest and dishonest) that are at distance 2 from a given player j in G, i.e., the players k ≠ j with obj(k) ∩ obj(j) ≠ ∅.
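The notation can be made concrete with a tiny sketch (the dictionary encoding of G and all names here are assumptions for illustration):

```python
def N(access: dict[str, set[int]], j: str) -> set[str]:
    """Players at distance 2 from j: those sharing an accessible object with j."""
    return {k for k in access if k != j and access[k] & access[j]}

access = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
# obj("a") = {1, 2}, deg("a") = 2, and N("a") = {"b"}; "c" has no neighbors.
```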

Page 27:

The Partial Access Model: Algorithm

The algorithm is the same as DynAlg from the dynamic model, except that the Balanced rule is adapted to the restricted access model.

In the new rule, a player j flips a coin. If the result is “heads,” it probes an object selected uniformly at random from obj(j). [Exploration rule]

If the result is “tails,” it selects a player k uniformly at random from N(j) and probes the object k recommends, if any; otherwise it probes an object selected uniformly at random from obj(j). [Exploitation rule]

Page 28:

The Partial Access Model

Theorem: Let Y be any set of honest players, and let X(Y) be the set of objects that would satisfy every player in Y. If X(Y) is nonempty, then the total work of the players in Y is at most [bound omitted in the transcript; see the interpretation on the next slide].

Page 29:

Interpretation

Consider any set Y of players with common interest X(Y) (meaning any object in X(Y) would satisfy any player in Y).

From the point of view of a player, its load is divided among the members of Y: the total work done by the group working together is roughly the same as the work of an individual working alone.

The first term in the bound is an upper bound on the expected amount of work until a player finds an object in X(Y).

The second term is an upper bound on the total number of recommendations (times a logarithmic factor) a player has to go through.

This is pleasing, because it indicates that the number of probes is nearly the best one can hope for.

Page 30:

Collaboration across groups without common interest

Consider sets of players who do not share a common interest. Of course, one can partition them into special interest groups (SIGs), where for each SIG there is at least one object that will satisfy all its members.

The theorem guarantees that each SIG is nearly optimal, in the sense that the total work done by a SIG is not much more than the total work that must be done even if the SIG members had perfect coordination (thus disregarding dishonest players).

However, the collection of SIGs may be suboptimal, due to overlaps in the neighborhood sets (which contribute to the second term of the upper bound).

Page 31:

Collaboration across groups without common interest

Does there always exist a “good” partition of players into SIGs, so that the overall work (summed over all SIGs) is close to optimal?

The answer is negative in the general case.

Even if each good object would satisfy many honest players, the total amount of work, over all players, can be close to the worst case (the sum of the work necessary if each player worked alone).

Page 32:

Simulation

The graph suggests that the algorithm works fairly well for values of p = 0.1 through p = 0.7. It suggests that a little sampling is necessary, and that a few recommendations can help a lot.

Page 33:

Conclusion

This paper shows that, in spite of asynchronous behavior, different interests, changes over time, and Byzantine behavior by an unknown subset of peers, the honest peers miraculously succeed in collaborating, in the sense that honest peers relatively rarely repeat the mistakes of other honest peers. One interesting feature of the method is that it mostly avoids the issue of discovering who the faulty peers are.

Future extensions:
1. What can be gained by trying to discover the faulty peers?
2. Another open question is tightening the bounds for the partial access case.