
Page 1: Chord Presentation at Papers We Love SF, August 2016

Chord: A scalable peer-to-peer lookup service for internet applications

Tom Faulhaber, [email protected]

Papers We Love SF, August 2016

Page 2: Chord Presentation at Papers We Love SF, August 2016
Page 3: Chord Presentation at Papers We Love SF, August 2016

Chord is a completely peer-to-peer distributed key management system that works under dynamic membership churn.

Page 4: Chord Presentation at Papers We Love SF, August 2016

Context

Page 5: Chord Presentation at Papers We Love SF, August 2016
Page 6: Chord Presentation at Papers We Love SF, August 2016
Page 7: Chord Presentation at Papers We Love SF, August 2016

Idea 1: Consistent Hashing

Page 8: Chord Presentation at Papers We Love SF, August 2016

Consistent Hashing

• Map keys to a hash m bits long, e.g. SHA-1.

• Construct a ring with operations performed mod 2^m.

Page 9: Chord Presentation at Papers We Love SF, August 2016

Consistent Hashing

• Map keys to a hash m bits long, e.g. SHA-1.

• Construct a ring with operations performed mod 2^m.

• For example, take m = 3.

• This gives us 2^3 = 8 separate addresses.
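
Below is a minimal sketch of this mapping in Python (not part of the original deck), assuming SHA-1 as the hash and the toy m = 3 from the example; a real deployment would use the full 160-bit SHA-1 identifier space.

import hashlib

M = 3                  # identifier bits; tiny here only to mirror the 2^3 = 8 example
RING_SIZE = 2 ** M     # number of distinct ring positions

def ring_position(key: str) -> int:
    """Hash a key with SHA-1 and reduce it onto the ring mod 2^m."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % RING_SIZE

if __name__ == "__main__":
    for key in ["alpha", "beta", "gamma"]:
        print(key, "->", ring_position(key))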

Page 10: Chord Presentation at Papers We Love SF, August 2016

Nodes and Keys

• Each node in the network has an address, typically addr = hash(ip).

• We define succ(k), the successor of k, as min(n) | n ≥ k mod 2^m.

• Key k is stored at node succ(k).

• Each node n knows its successor n' = succ(n + 1 mod 2^m).

• succ(k) gives O(N) lookup performance, where N is the number of nodes.
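
A small Python sketch (not from the deck) of this naive O(N) lookup, where each node knows only its immediate successor; the Node class and the four-node ring below are illustrative.

from dataclasses import dataclass

M = 6
RING = 2 ** M

@dataclass
class Node:
    ident: int                      # addr = hash(ip), already reduced mod 2^m
    successor: "Node" = None

def in_half_open(x: int, start: int, end: int) -> bool:
    """True if x lies in the ring interval (start, end]."""
    if start < end:
        return start < x <= end
    return x > start or x <= end    # the interval wraps past 0

def lookup(start_node: Node, k: int) -> Node:
    """Walk successor pointers until we reach succ(k); O(N) hops in the worst case."""
    n = start_node
    while not in_half_open(k, n.ident, n.successor.ident):
        n = n.successor
    return n.successor

# an illustrative four-node ring
nodes = [Node(5), Node(20), Node(33), Node(47)]
for a, b in zip(nodes, nodes[1:] + nodes[:1]):
    a.successor = b
print(lookup(nodes[0], 40).ident)   # -> 47, the first node at or after key 40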

Page 11: Chord Presentation at Papers We Love SF, August 2016

Idea 2: Finger Tables

Page 12: Chord Presentation at Papers We Love SF, August 2016

Finger tables

• To move from O(N) to O(log N), Chord uses a “finger table” to track nodes around the ring.

• Fundamental insight: dense information nearby, sparse information far away.

• Table defined by:

finger[k].start = (n + 2^(k-1)) mod 2^m, 1 ≤ k ≤ m

finger[k].interval = [finger[k].start, finger[k+1].start)

finger[k].node = succ(finger[k].start)

• Also track successor = finger[1].node and predecessor.
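
A sketch of building one node's finger table directly from these definitions; the global live_nodes list is only a stand-in for what a real node learns through the join protocol, and the node locations come from the example layout on the next slide.

M = 6
RING = 2 ** M

def succ(k: int, live_nodes: list[int]) -> int:
    """First live node id at or after ring position k (wrapping around)."""
    candidates = sorted(live_nodes)
    for n in candidates:
        if n >= k:
            return n
    return candidates[0]

def finger_table(n: int, live_nodes: list[int]) -> list[dict]:
    """finger[k].start = (n + 2^(k-1)) mod 2^m; .node = succ(start)."""
    starts = [(n + 2 ** (k - 1)) % RING for k in range(1, M + 1)]
    return [{
        "start": start,
        "interval": (start, starts[(i + 1) % M]),   # [start, next start)
        "node": succ(start, live_nodes),
    } for i, start in enumerate(starts)]

live = [3, 4, 7, 16, 42, 44, 50, 52]   # node locations from the example layout
for row in finger_table(7, live):       # reproduces the view from α (id 7)
    print(row)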

Page 13: Chord Presentation at Papers We Love SF, August 2016

Example Layout: m = 6, 2^m = 64

Node  Location
α     7
β     16
γ     42
δ     44
ε     50
ζ     52
η     3
θ     4

This table does not exist! (No single node has this global view.)

Page 14: Chord Presentation at Papers We Love SF, August 2016

The View from α

k  start  end  n
1  8      8    β
2  9      10   β
3  11     14   β
4  15     22   β
5  23     38   γ
6  39     7    γ

Starting from α, retrieve key 51

First step, ask γ

Page 15: Chord Presentation at Papers We Love SF, August 2016

The View from γ

k  start  end  n
1  43     43   δ
2  44     45   δ
3  46     49   ε
4  50     57   ε
5  58     9    η
6  10     42   β

Second step, ask ε

Page 16: Chord Presentation at Papers We Love SF, August 2016

The View from ε

k  start  end  n
1  51     51   ζ
2  52     53   ζ
3  54     57   η
4  58     1    η
5  2      17   η
6  18     50   γ

Third step, ask ζ

At this point succ(51) = ζ, so ζ will have the key.
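
A sketch (not the paper's pseudocode) of the routing rule behind this walkthrough: each hop forwards the query to the closest finger preceding the key, so a lookup of 51 starting at α (7) visits γ (42) and ε (50), whose successor ζ (52) owns the key. The helper names are illustrative.

M = 6
RING = 2 ** M
NODES = sorted([3, 4, 7, 16, 42, 44, 50, 52])   # the example layout

def succ(k):
    """First live node at or after ring position k (wrapping around)."""
    return next((n for n in NODES if n >= k % RING), NODES[0])

def fingers(n):
    """Finger nodes of n: succ((n + 2^(k-1)) mod 2^m) for k = 1..m."""
    return [succ((n + 2 ** (k - 1)) % RING) for k in range(1, M + 1)]

def in_open(x, a, b):
    """x in the open ring interval (a, b)."""
    return (a < x < b) if a < b else (x > a or x < b)

def in_half_open(x, a, b):
    """x in the ring interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def lookup(start, key):
    """Route toward succ(key); returns the hops taken and the owning node."""
    n, hops = start, [start]
    while not in_half_open(key, n, succ((n + 1) % RING)):
        closer = next((f for f in reversed(fingers(n)) if in_open(f, n, key)), n)
        if closer == n:          # no finger is closer; our successor owns the key
            break
        n = closer
        hops.append(n)
    return hops, succ((n + 1) % RING)

print(lookup(7, 51))   # -> ([7, 42, 50], 52): α asks γ, then ε, and ζ holds key 51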

Page 17: Chord Presentation at Papers We Love SF, August 2016

Idea 3: Handling Churn

Page 18: Chord Presentation at Papers We Love SF, August 2016

Joining the network

Once a node has assigned itself an id, n0, it does 3 things:

1. Builds its finger table and predecessor:

k  start     node
1  n0 + 1    succ(n0 + 1)
2  n0 + 2    succ(n0 + 2)
3  n0 + 4    succ(n0 + 4)
4  n0 + 8    succ(n0 + 8)
5  n0 + 16   succ(n0 + 16)
…  …         …

Page 19: Chord Presentation at Papers We Love SF, August 2016

Joining the network

Once a node has assigned itself an id, n0, it does 3 things:

1. Builds its finger table and predecessor

2. Updates other nodes that should have their finger tables point to n0

3. Notifies upper layers of software that they need to move keys

Page 20: Chord Presentation at Papers We Love SF, August 2016

Joining the network

Once a node has assigned itself an id, n0, it does 3 things:

1. Builds its finger table and predecessor

2. Updates other nodes that should have their finger tables point to n0

3. Notifies upper layers of software that they need to move keys

Joins take O(log² N) messages

O(1/N) of the keys will be moved
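
A condensed sketch of the three join steps, using a global Registry object in place of real RPCs; Registry and join are illustrative names, and step 2 happens implicitly here because registering the id is what makes succ() start returning the new node.

M = 6
RING = 2 ** M

class Registry:
    """Stand-in for the live network: just a sorted list of node ids."""
    def __init__(self, ids):
        self.ids = sorted(ids)

    def succ(self, k):
        """First live node at or after ring position k (wrapping around)."""
        return next((n for n in self.ids if n >= k % RING), self.ids[0])

    def pred(self, k):
        """Last live node strictly before ring position k (wrapping around)."""
        return next((n for n in reversed(self.ids) if n < k % RING), self.ids[-1])

def join(registry, new_id):
    # 1. Build the new node's finger table and predecessor.
    fingers = {k: registry.succ((new_id + 2 ** (k - 1)) % RING)
               for k in range(1, M + 1)}
    predecessor = registry.pred(new_id)

    # 2. Update other nodes that should now point at new_id (implicit in this
    #    toy model: once registered, succ() returns the new node).
    registry.ids = sorted(registry.ids + [new_id])

    # 3. Tell the layer above to move the keys in (predecessor, new_id]
    #    from the old successor to the new node.
    keys_to_move = (predecessor, new_id)
    return fingers, predecessor, keys_to_move

reg = Registry([3, 4, 7, 16, 42, 44, 50, 52])
print(join(reg, 30))   # node 30 takes over the keys in (16, 30] from node 42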

Page 21: Chord Presentation at Papers We Love SF, August 2016

Concurrency & Failure

Two basic mechanisms:

1. Every node periodically performs stabilization

2. Each node maintains a successor list rather than a single successor

When a node fails, its keys are lost. Higher layers use other mechanisms to build resiliency, e.g. republishing or replication.
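
A sketch of both mechanisms with an illustrative Node class: a periodic stabilize() repairs successor/predecessor pointers, and a short successor list (length 3 here; the paper suggests O(log N) entries) lets a node route around a failed successor.

SUCCESSOR_LIST_LEN = 3            # assumption for the demo; the paper uses O(log N)

def in_open(x, a, b):
    """x in the open ring interval (a, b)."""
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, ident):
        self.ident = ident
        self.predecessor = None
        self.successors = []      # successor list, closest first
        self.alive = True

    def first_live_successor(self):
        """Skip failed entries so one crash does not break the ring."""
        return next(s for s in self.successors if s.alive)

    def stabilize(self):
        """Run periodically: adopt a closer live successor if one has appeared,
        then notify it so it can repair its predecessor pointer."""
        s = self.first_live_successor()
        x = s.predecessor
        if x is not None and x.alive and in_open(x.ident, self.ident, s.ident):
            self.successors.insert(0, x)
            self.successors = self.successors[:SUCCESSOR_LIST_LEN]
            s = x
        s.notify(self)

    def notify(self, candidate):
        """Accept candidate as predecessor if ours is gone or candidate is closer."""
        p = self.predecessor
        if p is None or not p.alive or in_open(candidate.ident, p.ident, self.ident):
            self.predecessor = candidate

# tiny demo: node c fails; b's successor list routes around it, and
# stabilization repairs a's predecessor pointer.
a, b, c = Node(7), Node(16), Node(42)
a.successors, b.successors, c.successors = [b, c], [c, a], [a, b]
a.predecessor, b.predecessor, c.predecessor = c, a, b
c.alive = False
b.stabilize()
print(b.first_live_successor().ident, a.predecessor.ident)   # -> 7 16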

Page 22: Chord Presentation at Papers We Love SF, August 2016

Related Work

Page 23: Chord Presentation at Papers We Love SF, August 2016

Related Work

• Pastry

• CAN

• Kademlia

• Tapestry

Page 24: Chord Presentation at Papers We Love SF, August 2016

Impact

Page 25: Chord Presentation at Papers We Love SF, August 2016

Impact

• Research applications in domains such as distributed file systems, pub-sub, document sharing, and search algorithms.

• Basis for distributing data across nodes in systems like Cassandra without requiring a global index.

Page 26: Chord Presentation at Papers We Love SF, August 2016

The End!