Discovering In-network Caching Policies in NDN Networks from a Measurement Perspective
Chengyu Fan (Colostate), Susmit Shannigrahi (Tennessee Tech), Christos Papadopoulos (Colostate), Craig Partridge (Colostate)
NDN requires measuring in-network states
• Network measurement tools cover various aspects of IP networks
  – Network performance, state (routing, configuration, topology, etc.), and traffic
• NDN measurements must also capture in-network state
  – Caching policies, forwarding strategies, etc.
Goals and Assumptions
• Our goal: the first work to detect caching decisions from a measurement perspective
  – Caching is a central feature of NDN
  – Caching policy = caching decision + cache replacement
  – Multiple caching decisions may coexist in NDN networks, and they may interact poorly
• Assumptions
  – The best-route forwarding strategy and a uniform caching decision policy are used
  – The Priority-FIFO cache replacement policy is used (the default)
  – Only one producer exists
List of caching decisions developed for NDN
• Caching Everything Everywhere (CEE)
  – Cache every Data chunk locally
• Leave Copy Down (LCD)
  – Move the cached copy one hop down toward the client
• Label-caching
  – Pre-assign labels to routers; cache chunks whose ID % N matches the label value
• Static probabilistic caching (Prob-20, Prob-50, Prob-80)
  – Pre-define a probability value and compare it with a random number generated for each chunk
• Dynamic probabilistic caching (ProbCache, ProbCache-inv)
  – Dynamically calculate a cache weight based on the ratio of the hop counts of the Data and the Interest
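The per-chunk decision rules above can be sketched as simple predicates (a toy sketch with hypothetical helper names, not the simulator's code):

```python
import random

def cee(chunk_id, router_idx):
    # Caching Everything Everywhere: every router caches every chunk.
    return True

def label_caching(chunk_id, router_idx, num_routers):
    # Cache only chunks whose ID modulo N matches this router's label.
    return chunk_id % num_routers == router_idx

def prob_static(p):
    # Static probabilistic caching: cache with fixed probability p.
    return random.random() < p

def lcd(router_idx, hit_hop):
    # Leave Copy Down: cache only at the router one hop below the
    # router where the request was satisfied (hops counted from the client).
    return router_idx == hit_hop - 1
```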
Measurement procedure
[Diagram: a measurement client and a measurement server connected across an NDN network]
Client side:
1. Send out a train of 50 Interests with the given name prefix
   • Each carries a unique name: /<name-prefix>/<chunk-id>
3. Save the hop count for each chunk
4. Repeat steps 1-3 ten times (a cached copy can satisfy a duplicate request) and plot the hop count distribution
Server side:
• Pre-define the target name prefix, Data payload size, and other Data packet parameters
2. Answer each Interest
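The four steps can be exercised against a toy model of the cache chain (a sketch under the CEE decision; the names and the simulated content stores are our own, not the paper's tool):

```python
import collections

def measure(caches, producer_hop, chunk_ids, rounds):
    # caches: one content-store set per on-path router, nearest to the
    # client first. Returns, per round, the hop count at which each
    # chunk was answered.
    hop_counts = collections.defaultdict(list)
    for r in range(rounds):
        for cid in chunk_ids:                       # step 1: Interest train
            hop = producer_hop                      # step 2: producer answers
            for i, store in enumerate(caches, start=1):
                if cid in store:
                    hop = i                         # nearer cache hit instead
                    break
            hop_counts[r].append(hop)               # step 3: save hop count
            for store in caches[:hop]:              # CEE: every on-path router
                store.add(cid)                      # caches the returning Data
    return hop_counts                               # step 4: repeat and plot
```

In this toy model, with three empty routers and the producer four hops away, the first round yields hop count 4 for every chunk and every later round yields 1, which matches the CEE shape shown on the fingerprint slides.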
Example: LCD caching decision
• Leave Copy Down (LCD) caching decision mechanism
  – A requested chunk is cached only at the router one hop below the location of the hit on the path
• Takeaways
  – All chunks are cached at specific hops in each round, and the hop count differs across rounds
[Diagram: Client - R1 - R2 - R3 - Server; the client requests /data/chunk/1, /data/chunk/2, and /data/chunk/3 and the server responds; under LCD the cached copies move one router toward the client in each round]
[Figure: hop count (0-12) vs. request round (1-10) for LCD]
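The LCD behavior in this example condenses to a few lines (a toy model of the takeaway above, not the paper's code): every chunk in a round is served from the same hop, and that hop drops by one per round until it reaches the first router.

```python
def lcd_hops(producer_hop, rounds):
    # Track where the single cached copy of a chunk sits over rounds.
    hops = []
    copy_at = producer_hop        # initially only the producer has it
    for _ in range(rounds):
        hops.append(copy_at)      # this round's hit location
        if copy_at > 1:
            copy_at -= 1          # LCD: leave a copy one hop down
    return hops
```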
Fingerprint of the LCD mechanism
• Simulations with ndnSIM
  – A linear topology with 10 routers
• Two metrics uniquely identify a caching decision
  – The hop count distribution within one round
  – The change of that distribution across multiple rounds
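The two fingerprint metrics can be extracted from measured hop counts roughly as follows (our own summary using per-round means; the paper plots full distributions):

```python
from statistics import mean

def fingerprint(hop_counts):
    # hop_counts: {round_index: [hop count per chunk]}.
    # Metric 1: the hop count distribution within each round
    # (summarized here by its mean); metric 2: its change across rounds.
    rounds = sorted(hop_counts)
    per_round = [mean(hop_counts[r]) for r in rounds]
    deltas = [b - a for a, b in zip(per_round, per_round[1:])]
    return per_round, deltas
```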
Fingerprints for other mechanisms
• Caching Everything Everywhere (CEE): cache every Data chunk locally
• Label-caching: cache chunks whose ID % N matches the pre-assigned label value
• Static probabilistic caching (Prob-X): compare a random number with the pre-defined probability value
[Figures: hop count (0-12) vs. request round (1-10) for CEE, Label-caching, Prob-20, Prob-50, Prob-80, ProbCache, and ProbCache-inv]
Fingerprints for probabilistic caching mechanisms
• Static vs. dynamic probabilistic caching
• Dynamic probabilistic caching:
  – Calculate a weight based on the ratio of the hop counts of the Data and the Interest:
    CacheWeight = #hops(Data) / #hops(Interest)
  – ProbCache caches when Rand() < CacheWeight; ProbCache-inv caches when Rand() < (1 - CacheWeight)
• ProbCache-inv converges faster than Prob-20
• ProbCache converges slower than Prob-80
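The two dynamic rules follow directly from the formula above (hypothetical function names):

```python
import random

def cache_weight(hops_data, hops_interest):
    # Weight = hops the Data has travelled / hops the Interest travelled.
    return hops_data / hops_interest

def probcache(hops_data, hops_interest):
    # ProbCache: cache when a uniform draw falls below the weight.
    return random.random() < cache_weight(hops_data, hops_interest)

def probcache_inv(hops_data, hops_interest):
    # ProbCache-inv: inverted test against (1 - weight).
    return random.random() < 1 - cache_weight(hops_data, hops_interest)
```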
Cross traffic may hurt measurements
• Cross traffic may exist in networks
  – It occupies cache slots
  – It evicts cached probe packets
  – It impacts the detection of caching policies
• We check the effects by introducing cross traffic at the two ends of the linear topology
[Diagram: measurement client and measurement server at opposite ends of a chain of NDN routers with Content Stores; cross traffic is injected into the chain]
Robustness to cross traffic
• We can still identify the caching mechanisms, as most plot shapes are almost unchanged
[Figures: hop count vs. request round for Prob-20 under client-side and under server-side cross traffic]
Robustness to cross traffic (cont.)
• The shapes for LCD change, but its unique feature remains recognizable
[Figures: hop count vs. request round for LCD under client-side and under server-side cross traffic]
[Figures: hop count vs. request round for Prob-20, Prob-50, and Prob-80, and hop count vs. static probability (0.2, 0.5, 0.8)]
Estimate the static probability value
• Ideal shapes for the 1st round
• Shapes based on simulation results
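One plausible estimator (our assumption; the slide does not spell out the formula): under Prob-p, after the first round every on-path router holds a chunk independently with probability p, so the second-round hit hop is approximately geometric with parameter p, and the geometric maximum-likelihood estimate recovers p.

```python
def estimate_prob(second_round_hops):
    # MLE for a geometric distribution: number of samples over the
    # summed hop counts. Assumes the producer is far enough away that
    # essentially every chunk is answered from some cache (no censoring).
    return len(second_round_hops) / sum(second_round_hops)
```

For Prob-50 the mean second-round hop is about 2, giving an estimate near 0.5.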
[Figures: RTT (µs) vs. request round for Prob-50 and LCD]
Detecting on a real topology
• The NDN stack does not expose hop count information to applications
  – Can we use delays to infer the correct hop counts?
• We use the Rocketfuel 7018 topology with a randomly chosen client and server
[Figures: RTT vs. request round for LCD and Prob-20; delays may produce “correct” hop counts]
Delays do not always infer the correct hops
• In some cases, link delays are not proportional to hop counts
[Figures: RTT (µs) vs. request round for Prob-50 and LCD where delays do not track hop counts]
Estimating hop counts
• Use clustering algorithms (e.g., K-means) to group samples with similar delays
  – The resulting figures approximately show the correct shapes
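The clustering step can be sketched as a minimal 1-D K-means (a quantile-initialized Lloyd iteration; a sketch under our own assumptions, not the authors' implementation): samples with similar RTTs fall into the same cluster, and ranking the cluster centers by delay yields an estimated hop for each sample.

```python
def kmeans_hops(rtts, k, iters=20):
    # Initialize centers at evenly spaced quantiles of the sorted RTTs.
    srt = sorted(rtts)
    centers = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]

    def nearest(s):
        return min(range(k), key=lambda i: abs(s - centers[i]))

    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in rtts:                                    # assignment step
            groups[nearest(s)].append(s)
        centers = [sum(g) / len(g) if g else centers[i]   # update step
                   for i, g in enumerate(groups)]
    order = sorted(range(k), key=lambda i: centers[i])
    # Map each sample to the delay rank of its cluster (1 = fewest hops).
    return [order.index(nearest(s)) + 1 for s in rtts]
```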
[Figures: estimated hops vs. request round for LCD and Prob-50, recovered by clustering delays]
Conclusion
• Proposed a novel method to extract fingerprints of caching decision mechanisms
• The method can detect caching decision mechanisms from end hosts
  – It is not sensitive to cross traffic
  – It can estimate the static probability value
• Evaluated the method on a real topology
  – Applications can use delays to estimate hop counts
Future work
• Evaluate the method with more caching mechanisms on a real testbed (i.e., the NDN testbed)
• Study the robustness of our method with other cache replacement policies
• Integrate the measurement tool with the NDN measurement frameworks designed at NIST [1] [2]
• Study scenarios where multiple producers exist and other forwarding strategies are used
[1] Pesavento, Davide, et al. "A Network Measurement Framework for Named Data Networks." Proceedings of the 4th ACM Conference on Information-Centric Networking, 2017.
[2] Nichols, Kathleen. "Lessons Learned Building a Secure Network Measurement Framework Using Basic NDN." Proceedings of the 6th ACM Conference on Information-Centric Networking, 2019.
Thanks!