
Peer-To-Peer Multimedia Streaming Using BitTorrent

Purvi Shah, Jehan-François Pâris
University of Houston
Houston, TX

Problem Definition

• Objectives
  – Customer satisfaction: minimize customer waiting time
  – Cost effectiveness: reduce operational costs (mostly hardware costs)

[Diagram: one server serving many customers – transferring videos is a resource-intensive task!]

Transferring Videos

• Video download:
  – Just like any other file
  – Simplest case: the file is downloaded using a conventional protocol
  – Playback does not overlap with the transfer

• Video streaming from a server:
  – Playback of the video starts while the video is being downloaded
  – No need to wait until the download is completed
  – New challenge: ensuring on-time delivery of data
    • Otherwise the client cannot keep playing the video

Why use P2P Architecture?

• Infrastructure-based approach (e.g. Akamai)
  – Most commonly used
  – Client-server architecture
  – Expensive: huge server farms
  – Best-effort delivery
  – Client upload capacity completely unutilized
  – Not suitable for flash crowds

Why use P2P Architecture?

• IP Multicast
  – Highly efficient bandwidth usage
  – Several drawbacks so far
    • Infrastructure-level changes make most administrators reluctant to provide it
    • Security flaws
    • No effective and widely accepted transport protocol on top of the IP multicast layer

P2P Architecture

• Leverage the power of P2P networks
  – Multiple solutions are possible

• Tree-based structured overlay networks
  – Leaf clients’ bandwidth is unutilized
  – Less reliable
  – Complex overlay construction
  – Content bottlenecks
  – Fairness issues

Our Solution

• Mesh-based unstructured overlay
  – Based on the widely used BitTorrent (BT) content distribution protocol
  – A P2P protocol introduced around 2002
  – Linux distributors such as Lindows offer software updates via BT
  – Blizzard uses BT to distribute game patches
  – Film distribution through BT is starting this year

BitTorrent (I)

BitTorrent (II)

• Has a central tracker
  – Keeps information on peers
  – Responds to requests for that information
  – Service subscription

• Built-in incentives: rechoking (a sketch follows this slide)
  – Gives preference to cooperative peers: tit-for-tat exchange of content chunks
  – Random search: optimistic unchoke

• When all chunks are downloaded, peers can reconstruct the whole file
  – Not tailored to streaming applications
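The following is a minimal sketch of the rechoking mechanism described above: the regular unchoke slots go to the neighbors that have recently uploaded the fastest (tit-for-tat), and one extra neighbor is unchoked at random (optimistic unchoke). The peer names, rates, and slot count are illustrative assumptions, not part of the BT specification.

import java.util.*;

public class RechokeSketch {
    // Pick which neighbors to unchoke: the fastest recent uploaders get the
    // regular slots (tit-for-tat); one extra peer is unchoked at random
    // (optimistic unchoke) so newcomers get a chance to prove themselves.
    static Set<String> rechoke(Map<String, Double> uploadRateFromPeer,
                               int regularSlots, Random rng) {
        Set<String> unchoked = new LinkedHashSet<>();
        uploadRateFromPeer.entrySet().stream()
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
            .limit(regularSlots)
            .forEach(e -> unchoked.add(e.getKey()));                     // tit-for-tat slots
        List<String> remaining = new ArrayList<>(uploadRateFromPeer.keySet());
        remaining.removeAll(unchoked);
        if (!remaining.isEmpty())
            unchoked.add(remaining.get(rng.nextInt(remaining.size())));  // optimistic unchoke
        return unchoked;
    }

    public static void main(String[] args) {
        Map<String, Double> rates =
            Map.of("A", 120.0, "B", 45.0, "C", 300.0, "D", 10.0, "E", 0.0);
        System.out.println(rechoke(rates, 3, new Random(42)));
    }
}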

Evaluation Methodology

• Simulation-based
  – Answers depend on many parameters
  – Hard to control in measurements or to model

• Java-based discrete-event simulator (a minimal event-loop sketch follows)
  – Models queuing delay and transmission delay
  – Remains faithful to the BT specifications
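A minimal sketch of the kind of discrete-event loop such a simulator is built around; the event record, the 4 Mbps link, and the 256 KB chunk are illustrative assumptions and do not reproduce the authors' simulator.

import java.util.*;

public class DesSketch {
    // An event is an action scheduled to run at a given simulated time.
    record Event(double time, Runnable action) {}

    static final PriorityQueue<Event> agenda =
        new PriorityQueue<>(Comparator.comparingDouble(Event::time));
    static double now = 0.0;                     // simulated clock, in seconds

    static void schedule(double delay, Runnable action) {
        agenda.add(new Event(now + delay, action));
    }

    public static void main(String[] args) {
        // Example: one 256 KB chunk sent over a 4 Mbps link.
        // Transmission delay = chunk size / link bandwidth.
        double chunkBits = 256 * 1024 * 8;
        double linkBitsPerSecond = 4_000_000;
        schedule(chunkBits / linkBitsPerSecond,
                 () -> System.out.printf("chunk delivered at t = %.3f s%n", now));
        while (!agenda.isEmpty()) {              // main event loop
            Event e = agenda.poll();
            now = e.time();                      // advance the simulated clock
            e.action().run();
        }
    }
}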

BT Limitations

• BT does not account for the real-time needs of streaming applications
  – Chunk selection
    • Peers do not download chunks in sequence
  – Neighbor selection
    • The incentive mechanism makes too many peers wait too long before joining the swarm

Chunk Selection Policy

• Replace the BT rarest-first policy with a sliding window policy
  – The forward-moving window is equal to the viewing delay

[Diagram of the sliding window over the chunk sequence, labeling the playback start, playback delay, download window, received chunks, chunks not yet received, and missed chunks]

Two Options

• Sequential policy
  – Peers download first the chunks at the beginning of the window
  – Limits the opportunity to exchange chunks between peers

• Rarest-first policy (sketched in code below)
  – Peers download first the chunks within the window that are least replicated among their neighbors
  – Preserves the feasibility of swarming by diversifying the chunks available among peers

[Figure comparing the two policies, with the best and worst performers labeled]
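A minimal sketch, under assumed data structures, of the window-limited rarest-first selection described above: only missing chunks inside the forward-moving window are considered, and the one held by the fewest neighbors is requested first.

public class WindowRarestFirstSketch {
    // Return the index of the rarest missing chunk inside the download window,
    // or -1 if no neighbor currently holds a missing chunk in the window.
    static int nextChunk(int playbackPos, int windowSize,
                         boolean[] have, int[] copiesAmongNeighbors) {
        int best = -1;
        int bestCopies = Integer.MAX_VALUE;
        int end = Math.min(playbackPos + windowSize, have.length);
        for (int c = playbackPos; c < end; c++) {          // only look inside the window
            if (!have[c] && copiesAmongNeighbors[c] > 0
                         && copiesAmongNeighbors[c] < bestCopies) {
                best = c;                                  // rarest chunk seen so far
                bestCopies = copiesAmongNeighbors[c];
            }
        }
        return best;
    }

    public static void main(String[] args) {
        boolean[] have   = {true, true, false, false, true, false};
        int[]     copies = {5,    4,    3,     1,     2,    6};
        // Window covers chunks 1..4; chunk 3 is missing and held by only one neighbor.
        System.out.println(nextChunk(1, 4, have, copies)); // prints 3
    }
}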

Discussion

• Switching to a sliding window policy greatly increases quality of service
  – Must use a rarest-first-inside-window policy
  – This change alone does not suffice to achieve a satisfactory quality of service

Neighbor Selection Policy

• BT tit-for-tat policy
  – Peers select other peers according to their observed behaviors
  – A significant number of peers suffer from a slow start

• Randomized tit-for-tat policy (sketched below)
  – At the beginning of every playback, each peer selects neighbors at random
  – Rapid diffusion of new chunks among peers
  – Gives more free tries to a larger number of peers in the swarm to download chunks
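A minimal sketch of the randomized tit-for-tat idea: during an assumed warm-up period after playback starts, the unchoke slots are filled at random ("free tries"); afterwards, selection reverts to rate-based tit-for-tat. The warm-up threshold and method names are illustrative, not taken from the protocol specification.

import java.util.*;

public class RandomizedTitForTatSketch {
    // Early in a playback session, fill the unchoke slots at random;
    // once the warm-up period has passed, revert to rate-based tit-for-tat.
    static List<String> pickNeighbors(List<String> candidates,
                                      Map<String, Double> observedRate,
                                      int slots,
                                      double secondsSincePlaybackStart,
                                      double warmupSeconds,
                                      Random rng) {
        List<String> pool = new ArrayList<>(candidates);
        if (secondsSincePlaybackStart < warmupSeconds) {
            Collections.shuffle(pool, rng);                        // randomized selection
        } else {
            pool.sort(Comparator.comparingDouble(
                (String p) -> -observedRate.getOrDefault(p, 0.0))); // tit-for-tat
        }
        return pool.subList(0, Math.min(slots, pool.size()));
    }

    public static void main(String[] args) {
        List<String> peers = List.of("A", "B", "C", "D");
        Map<String, Double> rates = Map.of("A", 10.0, "B", 90.0, "C", 50.0, "D", 0.0);
        System.out.println(pickNeighbors(peers, rates, 2,   5.0, 60.0, new Random(1)));
        System.out.println(pickNeighbors(peers, rates, 2, 300.0, 60.0, new Random(1)));
    }
}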

Discussion

• Should combine our neighbor selection policy with our sliding window chunk selection policy

• Can then achieve excellent QoS with playback delays as short as 30 s, as long as the video consumption rate does not exceed 60% of the network link bandwidth

Comparison with Client-Server Solutions

[Plot: average playback delay (s), on a logarithmic scale from 10 to 1,000,000, versus the number of peers (10 to 1000), comparing Modified BT, a client-server solution, and a client-server solution with five mirrors]

Chunk size selection

• Small chunks
  – Result in faster chunk downloads
  – Occasion more processing overhead

• Larger chunks
  – Cause slow starts for every sliding window

• Our simulations indicate that 256 KB is a good compromise (a back-of-the-envelope sketch follows the figure below)

[Plot: success ratio (percent) versus chunk size (64 to 1024 KB) in the resource-critical region (4 Mbps), comparing randomized tit-for-tat and plain tit-for-tat with playback delays of 30 s and 120 s]
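A back-of-the-envelope sketch of the trade-off behind the 256 KB choice: smaller chunks shorten the time to fetch each chunk (and hence the slow start of each window) but multiply the number of chunks to request and track. The 4 Mbps link corresponds to the resource-critical region above; the video size is an assumption made for the illustration.

public class ChunkSizeTradeoffSketch {
    public static void main(String[] args) {
        double linkBitsPerSecond = 4_000_000;   // the resource-critical rate above
        long videoBytes = 700L * 1024 * 1024;   // assumed video size: ~700 MB
        for (int kb : new int[] {64, 128, 256, 512, 1024}) {
            long chunkBytes = kb * 1024L;
            double secondsPerChunk = chunkBytes * 8 / linkBitsPerSecond;   // fetch time
            long totalChunks = (videoBytes + chunkBytes - 1) / chunkBytes; // requests to track
            System.out.printf("%4d KB chunks: %.2f s per chunk, %6d chunks to manage%n",
                              kb, secondsPerChunk, totalChunks);
        }
    }
}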

Premature Departures

• Peer departures before the end of the session
  – Can be voluntary or result from network failures
  – When a peer leaves the swarm, it tears down the connections to its neighbors
  – Each of its neighbors loses one of its active connections (a handling sketch follows)

Can tolerate the loss of at least 60% of the peers
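A minimal sketch, with assumed field names, of how a peer might react when a neighbor departs prematurely: the connection is dropped and that neighbor's chunks are discounted from the availability counts used by the window-limited rarest-first selector sketched earlier.

import java.util.*;

public class DepartureHandlingSketch {
    final Set<String> neighbors = new HashSet<>();
    final Map<String, BitSet> chunksHeldBy = new HashMap<>();  // neighbor -> chunk bitmap
    final int[] copiesAmongNeighbors;                          // used by chunk selection

    DepartureHandlingSketch(int totalChunks) {
        copiesAmongNeighbors = new int[totalChunks];
    }

    // When a neighbor leaves the swarm, drop the connection and stop counting
    // its chunks, so the rarest-first selector no longer relies on it.
    void onNeighborLeft(String peerId) {
        if (!neighbors.remove(peerId)) return;                 // connection already gone
        BitSet held = chunksHeldBy.remove(peerId);
        if (held != null)
            held.stream().forEach(c -> copiesAmongNeighbors[c]--);
        // A real client would also ask the tracker for replacement peers here.
    }

    public static void main(String[] args) {
        DepartureHandlingSketch peer = new DepartureHandlingSketch(4);
        peer.neighbors.add("A");
        BitSet held = new BitSet();
        held.set(0);
        held.set(2);
        peer.chunksHeldBy.put("A", held);
        peer.copiesAmongNeighbors[0] = 1;
        peer.copiesAmongNeighbors[2] = 1;
        peer.onNeighborLeft("A");
        System.out.println(Arrays.toString(peer.copiesAmongNeighbors)); // [0, 0, 0, 0]
    }
}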

Future Work

• Current work
  – On-demand streaming

• Robustness
  – Detect malicious and selfish peers
  – Incorporate a trust management system into the protocol

• Performance evaluation
  – Conduct a comparison study

Thank You – Questions?

Contact: [email protected]@cs.uh.edu

Extra slides

nVoD

• Dynamics of client participation, i.e. churn
  – Clients do not synchronize their viewing times

• Serve many peers even if they arrive according to different patterns

Admission control policy (I)

• Determine whether a new peer request can be accepted without violating the QoS requirements of the existing customers

• Based on a server-oriented staggered broadcast scheme
  – Combining P2P streaming and staggered broadcasting ensures high QoS
  – Beneficial for popular videos

Admission control policy (II)

• Use the tracker to batch clients arriving close in time into a session
  – Closeness is determined by a threshold θ

• Service latency, though server oriented, is independent of the number of clients
  – Can handle flash crowds

• Dedicating η channels to each video makes the worst-case service latency w = D/η, where D is the video length (a batching sketch follows)
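A minimal sketch of the batching idea: clients arriving within the threshold θ of each other join the same session, and with η staggered channels per video the worst-case service latency is D/η. Taking θ equal to the stagger interval, as well as the video duration and channel count used here, are assumptions made for the illustration.

public class AdmissionSketch {
    public static void main(String[] args) {
        double videoDuration = 7200;                 // D: assumed 2-hour video, in seconds
        int channels = 24;                           // eta: channels dedicated to this video
        double stagger = videoDuration / channels;   // worst-case service latency D/eta
        double theta = stagger;                      // assumption: batching threshold = stagger

        System.out.printf("worst-case service latency: %.0f s%n", stagger);

        double[] arrivals = {5, 40, 310, 580, 905};  // client arrival times, in seconds
        for (double t : arrivals) {
            long batch = (long) (t / theta);         // clients in the same slot share a session
            double sessionStart = (batch + 1) * theta;
            System.out.printf("arrival %4.0f s -> batch %d, session starts at %4.0f s%n",
                              t, batch, sessionStart);
        }
    }
}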

Results

• We use the M/D/η queuing model to estimate the effect on the playback delay experienced by the peers