Supporting Cooperative Caching in Disruption Tolerant Networks Wei Gao and Guohong Cao Dept. of Computer Science and Engineering Pennsylvania State University Arun Iyengar and Mudhakar Srivatsa IBM T. J. Watson Research Center


Page 1: Supporting Cooperative Caching in Disruption Tolerant Networks

Supporting Cooperative Caching in Disruption Tolerant Networks

Wei Gao and Guohong Cao
Dept. of Computer Science and Engineering, Pennsylvania State University

Arun Iyengar and Mudhakar Srivatsa
IBM T. J. Watson Research Center

Page 2

Outline

Introduction
Network Central Locations (NCLs)
Caching Scheme
Performance Evaluation
Summary & Future Work

Page 3

Disruption Tolerant Networks (DTNs)

Consist of hand-held personal mobile devices: laptops, PDAs, smartphones
Opportunistic and intermittent network connectivity, resulting from node mobility, device power outages, or malicious attacks
Hard to maintain end-to-end communication links
Data transmission relies on opportunistic contacts

Page 4

Data Transmission in DTNs: Carry-and-Forward

Mobile nodes physically carry data as relays
Data is forwarded opportunistically upon contacts
Major problem: appropriate relay selection

[Figure: node A selects between relays B and C, with contact probabilities 0.7 and 0.5]

Page 5

Providing Data Access in DTNs

Active data dissemination: the data source actively pushes data to the users interested in it
Publish/Subscribe: brokers forward data to users according to their subscriptions
Caching

Page 6

Our Focus

Cooperative caching in DTNs: cache data at appropriate network locations so that future queries are answered faster

Major challenges:
Where to cache: it is hard to monitor the network query pattern and determine the caching locations that minimize data access delay
How much to cache: uncertainty of opportunistic data transmission; tradeoff between data accessibility and caching overhead

Page 7

Challenges

Wireless ad-hoc network with end-to-end network connectivity: nodes incidentally cache pass-by data

[Figure: nodes A–K forward data items d1 and d2; queries q1(d1), q2(d1), q3(d2) are answered from caches along the paths]

Page 8

Challenges

Disruption Tolerant Networks (DTNs)

[Figure: the same scenario in a DTN, where contacts between nodes are only intermittent]

Page 9

Basic Ideas

Intentionally cache data at a set of Network Central Locations (NCLs): each NCL is a group of mobile nodes represented by a central node, and is easily accessed by other nodes in the network
Utility-based cache replacement: popular data is cached near the central nodes
Coordinate multiple caching nodes: tradeoff between data accessibility and caching overhead

Page 10

The Big Picture

[Figure: data source S pushes data to NCL 1 and NCL 2, represented by central nodes C1 and C2; requester R pulls the data by sending queries to the NCLs]

Page 11

Outline

Introduction
Network Central Locations (NCLs)
Caching Scheme
Performance Evaluation
Summary & Future Work

Page 12

Network Modeling

Contacts in DTNs are described by a network contact graph G(V, E)
The contact process between nodes i and j is modeled as an edge e_ij on the network contact graph
Contacts between nodes i and j are modeled as a Poisson process with contact rate λ_ij
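Under this model, the probability that a pair of nodes contacts at least once within a window T follows directly from the exponential inter-contact time. A minimal sketch (the rate and window values are illustrative):

```python
import math
import random

# Probability that nodes i and j contact at least once within time T,
# given a Poisson contact process with rate lam (so inter-contact times
# are exponential with rate lam).
def contact_prob(lam, T):
    return 1.0 - math.exp(-lam * T)

# Cross-check the closed form against sampled inter-contact times.
rng = random.Random(42)
lam, T = 0.5, 2.0  # illustrative: 0.5 contacts per hour, 2-hour window
analytic = contact_prob(lam, T)
empirical = sum(rng.expovariate(lam) <= T for _ in range(100_000)) / 100_000
```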

Page 13

NCL Selection

Central nodes representing the NCLs are selected based on a probabilistic metric, so that central nodes can be easily accessed by other nodes
The number K of NCLs is a pre-defined parameter
Central nodes are selected in a centralized manner based on global network knowledge

Page 14

NCL Selection Metric

The metric of node i for selection as a central node is the average probability that data can be transmitted from a random non-central node to i, over an opportunistic path, within time T
The average is taken excluding the set of existing central nodes
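One way to realize such a centralized selection is a greedy sweep that repeatedly picks the node most reachable, on average, from the remaining non-central nodes. A sketch under the assumption that pairwise delivery probabilities within T have already been computed (the `select_central_nodes` helper and its tie-breaking are illustrative, not the paper's exact algorithm):

```python
def select_central_nodes(p, K):
    """Greedy selection of K central nodes.

    p[j][i] is the (precomputed) probability that data reaches node i
    from node j within time T. At each step, pick the candidate whose
    average reachability from the other non-central nodes is highest.
    """
    nodes = list(range(len(p)))
    central = []
    for _ in range(K):
        candidates = [i for i in nodes if i not in central]

        def avg_reach(i):
            others = [j for j in candidates if j != i]
            return sum(p[j][i] for j in others) / len(others)

        central.append(max(candidates, key=avg_reach))
    return central
```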

Page 15

Trace-based Validation

The applicability of NCL selection is validated on realistic DTN traces
Only a few nodes have high metric values
Our metric reflects the heterogeneity of node contact patterns

Page 16

Outline

Introduction
Network Central Locations (NCLs)
Caching Scheme
Performance Evaluation
Summary & Future Work

Page 17

Caching Scheme Overview

Initial caching locations: the data source pushes data to the NCLs
Querying data: requesters pull data from the NCLs by multicasting queries to them
Utility-based cache replacement: when caching nodes contact each other, more popular data is cached nearer to the central nodes

Page 18

Initial Caching Locations

Central nodes are prioritized to cache data
Data is cached at nodes near the central nodes if the central nodes' buffers are full

[Figure: source S pushes data to central nodes C1, C2, C3; nodes R11–R34 near each central node cache the data when the central buffers are full]

Page 19

Querying Data

The requester multicasts its query to the central nodes
If the data is cached at a central node, the central node responds directly
Otherwise, the query is forwarded to the caching nodes

[Figure: requester R multicasts a query to C1, C2, and C3; the query is forwarded to caching nodes A and B]

Page 20

Analyzing Data Access Delay

The data access delay is related to the number K of NCLs
When K is small: the data transmission delay between requesters and NCLs is longer, but data can be cached nearer to the central nodes
When K is large: the metric values of some central nodes may not be high

Page 21

Utility-based Cache Replacement

Two caching nodes exchange their cached data when they contact each other
Popular data is cached near the central nodes to minimize the cumulative data access delay
Data utility is determined by: the popularity of the data in the network, and the distance from the caching node to the corresponding central node

Page 22

Utility-based Cache Replacement

Knapsack formulation: when nodes A and B contact, the data items cached at A and B are redistributed between the two nodes based on their data utilities and data sizes
Each data item is only cached at one node
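The redistribution at a contact can be sketched as a 0/1 knapsack: the node nearer the central node keeps the highest-utility items that fit its buffer, and the rest go to the other node. A minimal dynamic-programming sketch (integer sizes and the `pack_high_utility` helper are assumptions for illustration, not the paper's exact algorithm):

```python
def pack_high_utility(items, capacity):
    """0/1 knapsack: choose which data items the node nearer the central
    node keeps, maximizing total utility within its buffer capacity.

    items: list of (utility, size) with integer sizes.
    Returns (best_utility, chosen_indices); unchosen items would be
    handed to the other caching node during the exchange.
    """
    n = len(items)
    dp = [0.0] * (capacity + 1)            # dp[c] = best utility in space c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (u, s) in enumerate(items):
        for c in range(capacity, s - 1, -1):
            if dp[c - s] + u > dp[c]:
                dp[c] = dp[c - s] + u
                keep[i][c] = True
    # Backtrack to recover which items were chosen.
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= items[i][1]
    return dp[capacity], sorted(chosen)
```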

Page 23

Utility-based Cache Replacement

Example: A is nearer to the central node

Page 24

Utility-based Cache Replacement

Less popular data may be removed from the cache if the caching buffer is very limited

Page 25

Outline

Introduction
Network Central Locations (NCLs)
Caching Scheme
Performance Evaluation
Summary & Future Work

Page 26

Experimental Settings

Data generation: each node periodically decides whether to generate new data with a fixed probability p_G = 0.2; data lifetime is uniformly distributed in [0.5T, 1.5T]
Query pattern: queries are randomly generated at all nodes periodically; the query pattern follows a Zipf distribution; the time constraint of each query is T/2
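A Zipf query pattern like the one above can be generated by weighting the k-th most popular item by 1/k^s. A sketch (the exponent s and item count are assumptions; the slides do not state them):

```python
import random

def zipf_weights(n, s=1.0):
    """Unnormalized Zipf weights: the k-th item gets weight 1/k^s."""
    return [1.0 / (k ** s) for k in range(1, n + 1)]

def generate_queries(n_items, n_queries, s=1.0, seed=0):
    """Draw query targets from a Zipf distribution over n_items data
    items (item 0 is the most popular)."""
    rng = random.Random(seed)
    weights = zipf_weights(n_items, s)
    return rng.choices(range(n_items), weights=weights, k=n_queries)

queries = generate_queries(n_items=50, n_queries=10_000)
```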

Page 27

Caching Performance

On the MIT Reality trace with different data lifetimes (T), our scheme achieves a higher successful ratio with lower caching overhead

[Figures: successful ratio and caching overhead vs. data lifetime T]

Page 28

Effectiveness of Cache Replacement

Comparing different replacement strategies on the MIT Reality trace: utility-based replacement improves the successful ratio with similar replacement overhead

Page 29

Impact of the Number of NCLs

On the Infocom06 trace with T = 3 hours, the optimal number of NCLs is K = 5

Page 30

Summary

Cooperative caching in DTNs: intentionally cache data at a pre-specified set of NCLs that can be easily accessed by other nodes
NCL selection is based on a probabilistic metric
Utility-based cache replacement maximizes the cumulative caching performance

Future work: distributed NCL selection; load balancing across NCLs

Page 31

Thank you!

http://mcn.cse.psu.edu

The paper and slides are also available at: http://www.cse.psu.edu/~wxg139

Page 32

Traces

Record user contacts on a university campus through various wireless interfaces:
Bluetooth: periodically detects nearby peers
WiFi: associates to the best Access Point (AP)

Page 33

Opportunistic Path

Each hop corresponds to a stochastic contact process with a pairwise contact rate
X_k: the inter-contact time between nodes N_k and N_{k+1}, exponentially distributed
Y: the time needed to transmit data from A to B along the path; Y follows a hypoexponential distribution
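Since Y is a sum of independent exponential hop delays, the delivery probability within T can be estimated by Monte Carlo without the closed-form hypoexponential CDF. A sketch (the `delivery_prob` helper and the rate values are illustrative):

```python
import random

def path_delay(rates, rng):
    """Sample the end-to-end delay Y of an opportunistic path: the sum
    of independent exponential inter-contact times X_k, one per hop, so
    Y is hypoexponentially distributed."""
    return sum(rng.expovariate(r) for r in rates)

def delivery_prob(rates, T, n=100_000, seed=1):
    """Monte Carlo estimate of P(Y <= T): the probability that data
    traverses the whole path within time T."""
    rng = random.Random(seed)
    return sum(path_delay(rates, rng) <= T for _ in range(n)) / n
```

For a single hop with rate 1, P(Y <= 1) should be close to 1 - 1/e, and adding hops can only lower the delivery probability for the same window.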

Page 34

Probabilistic Data Selection in Cache Replacement

Fairness of caching: if every node caches the most popular data, caching effectiveness is only locally maximized
Probabilistic data selection: each caching node probabilistically decides whether to cache a data item, using the data utility as the probability
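The idea can be sketched directly: treat each item's utility as its caching probability, so different nodes cache different mixes rather than all converging on the most popular items (the item names and utility values below are hypothetical):

```python
import random

def select_for_cache(items, rng):
    """Probabilistic data selection: cache each item with probability
    equal to its utility (assumed normalized to [0, 1]), instead of
    deterministically keeping only the highest-utility items.
    items: list of (name, utility)."""
    return [name for name, utility in items if rng.random() < utility]

rng = random.Random(7)
items = [("d1", 0.9), ("d2", 0.5), ("d3", 0.1)]  # hypothetical utilities
```

Over many contacts, high-utility data is still cached most often, but low-utility data retains a nonzero chance of being kept somewhere in the network.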

Page 35

Comparisons

NoCache: caching is not used for data access
RandomCache: each requester caches received data randomly
CacheData: proposed for cooperative caching in wireless ad-hoc networks
BundleCache: caches network data as bundles