Emulating AQM from End Hosts
Presenters: Syed Zaidi, Ivor Rodrigues



Slide 1: Emulating AQM from End Hosts. Presenters: Syed Zaidi, Ivor Rodrigues.

Slide 2: Introduction
- Congestion control at the end host.
- Treating the network as a black box.
- Main indicator: round-trip time (RTT).
- Probabilistic Early Response TCP (PERT).

Slide 3: Motivation
- Implementing AQM at the router is not easy.
- Current techniques depend on packet loss to detect congestion.
- It is easier to modify the TCP stack at the end host.
- The approach can work with any AQM mechanism at the router.

Slide 4: Challenges
- RTT-based estimation has been characterized as inaccurate.
- Queuing delays are hard to measure when they are small compared to the RTT.

Slide 5: Accuracy of End-Host Based Congestion Estimation
- Previous studies looked at the relation between an increase in RTT and packet loss for a single stream.
- Results: (1) losses are preceded by an increase in RTT in very few cases; (2) responding to a false prediction results in a severe loss in performance.

Slide 6: Accuracy of End-Host Based Congestion Estimation
- (Transition-diagram figure.) Transition 4 is a false negative and transition 5 is a false positive.

Slide 7: Accuracy of End-Host Based Congestion Estimation
- Previous studies claim transition 5 happens more often than transition 2.
- A limitation of those studies is that they relate higher RTT to packet loss for a single flow.
- Packet loss should be observed at the router, not for a single flow.

Slide 8: Accuracy of End-Host Based Congestion Estimation
- ns-2 simulation: two routers connected by a 100 Mbps link, with end nodes on 500 Mbps links, and different combinations of long-term and short-term flows.
- The reference flows have an RTT of 60 ms, which corresponds to roughly 12,000 km.

Slide 9: Different Congestion Predictors
- Efficiency of packet-loss prediction = (number of 2-transitions) / (2-transitions + 5-transitions).
- False positives = (number of 5-transitions) / (2-transitions + 5-transitions).
- False negatives = (number of 4-transitions) / (2-transitions + 4-transitions).

Slide 10: Previous Work
- In 1989 the first paper was published proposing to enhance TCP with delay-based congestion avoidance.
- TRI-S: throughput is used to detect congestion instead of delay.
- DUAL: the current RTT is compared with the average of the minimum and maximum RTT.
- Vegas: achieved throughput is compared to the expected throughput based on the minimum observed RTT.
- CIM: a moving average of a small number of RTT samples is compared with a moving average of a large number of RTT samples.
- CARD: Congestion Avoidance using Round-trip Delay.

Slide 11: Improving Congestion Prediction
- Vegas, CARD, TRI-S, and DUAL obtain RTT samples once per RTT.
- Smoothed RTT: an exponentially weighted moving average (EWMA) over RTT samples.

Slide 12: Improving Congestion Prediction
- We improve accuracy through more frequent sampling and history information (see the sketch below).
- End-host congestion prediction is not perfect, so we need mechanisms to counter this inaccuracy.
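To make the smoothed-RTT predictor of slides 11-12 concrete, here is a minimal Python sketch of per-sample EWMA smoothing. The weight of 0.99 on history matches the srtt weight mentioned on slide 19; the function and variable names are illustrative, not taken from the slides.

```python
# Minimal sketch of EWMA smoothing over per-packet RTT samples.
def update_srtt(srtt, rtt_sample, alpha=0.99):
    """Blend a new RTT sample into the smoothed estimate.
    alpha is the weight on history (0.99, per slide 19)."""
    if srtt is None:                 # first sample initializes the estimate
        return rtt_sample
    return alpha * srtt + (1.0 - alpha) * rtt_sample

# Example: smoothing a noisy stream of RTT samples (in seconds).
samples = [0.060, 0.062, 0.061, 0.075, 0.064]
srtt = None
for s in samples:
    srtt = update_srtt(srtt, s)
print(f"smoothed RTT = {srtt * 1000:.2f} ms")
```

Sampling on every acknowledged packet, rather than once per RTT as Vegas, CARD, TRI-S, and DUAL do, is what gives the predictor its frequency and history advantage.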
Slide 13: Response to Congestion Prediction
- How do we reduce the impact of false positives?
- Keep the amount of response small.
- Respond probabilistically.

Slide 14: Response to Congestion Prediction
- Keeping the response small (the Vegas approach): little loss in throughput, and high link utilization is maintained, but a buildup of the bottleneck queue may not be cleared out quickly.

Slide 15: Response to Congestion Prediction
- Vegas uses additive decrease for early congestion response.
- This trades off the fairness properties of TCP in order to maintain high link utilization.

Slide 16: Response to Congestion Prediction
- Using AI/AD for these transitions compromises the fairness properties of the protocol.

Slide 17: Response to Congestion Prediction
- RTT = propagation delay + queuing delay.
- Compared to a flow that starts earlier, a flow that starts late may have a different idea of the minimum RTT on the path.
- This gives flows that start later an unfair advantage: a larger share of the bandwidth.

Slide 18: Response to Congestion Prediction
- Respond probabilistically: when the probability of false positives is high, the probability of responding to an early congestion signal should be low.
- High probability of false positives: low response. Low probability of false positives: high response.

Slide 19: Designing the Probabilistic Response
- False positives occur when the queue length is smaller, specifically when the queue length is less than 50% of the total queue size.
- The smoothed RTT (srtt, with a weight of 0.99) is the congestion-predictor signal.

Slide 20: Designing the Probabilistic Response
- What should the response function be? The response should be small for a small queue size and large for a large queue size.

Slide 21: Designing the Probabilistic Response
- Thus we emulate the probabilistic response function of RED, hence the name: Probabilistic Early Response TCP (PERT).

Slide 22: PERT
- T_min = minimum threshold = P + 5 ms = 5 ms.
- T_max = maximum threshold = P + 10 ms = 10 ms.
- p_max = maximum probability of response = 0.05.
- P = propagation delay = ?? = 0!

Slide 23: Probabilistic response curve used by PERT. (A sketch of the curve appears below.)
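The RED-style curve of slides 21-23 can be written out directly from the slide-22 parameters. Below is a minimal Python sketch; the names are illustrative, and treating any queuing delay above T_max as a certain response follows classic RED, since the transcript does not spell out PERT's behavior in that region, that clamping is an assumption.

```python
# Minimal sketch of a RED-style early-response probability curve,
# using the PERT parameters from slide 22.
import random

T_MIN = 0.005   # s, minimum threshold (P + 5 ms, with propagation delay P = 0)
T_MAX = 0.010   # s, maximum threshold (P + 10 ms)
P_MAX = 0.05    # maximum probability of early response

def response_probability(queuing_delay):
    """Map the estimated queuing delay (srtt minus the propagation delay)
    to an early-response probability, in the style of RED's drop curve."""
    if queuing_delay < T_MIN:
        return 0.0
    if queuing_delay < T_MAX:
        # linear ramp from 0 at T_MIN up to P_MAX at T_MAX
        return P_MAX * (queuing_delay - T_MIN) / (T_MAX - T_MIN)
    return 1.0  # assumption: respond deterministically beyond T_MAX, as RED does

def should_respond_early(queuing_delay):
    """Flip a biased coin to decide whether to reduce the window early."""
    return random.random() < response_probability(queuing_delay)

# Example: response probabilities at a few queuing delays.
for d_ms in (3, 6, 9, 12):
    p = response_probability(d_ms / 1000.0)
    print(f"{d_ms:>3} ms -> p = {p:.3f}")
```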
Slide 24: Is a 50% reduction of the congestion window necessary on early response?
- Router buffers are commonly set to the bandwidth-delay product (BDP) of the link, since a TCP flow reduces its window by 50%.
- If B is the buffer size, C the link capacity, and f the fraction the window is reduced to, full utilization requires B = C x RTT x (1 - f) / f; with f = 1/2 this gives exactly the BDP.
- Since flows respond before the bottleneck queue is full, a large multiplicative decrease can result in lower link utilization, but reducing the amount of response makes it hard to empty the buffer, leading to unfairness.

Slide 25: Experimental Evaluation: Impact of Bottleneck Link Bandwidth
- Setup: a single bottleneck, with the bottleneck bandwidth varied from 1 Mbps to 1 Gbps and the RTT from 10 ms to 1 s.
- Simulations run for 400 s; results are measured during the stable period. The RTT is set to 60 ms.

Slide 26: Experimental Evaluation: Impact of Round-Trip Delays
- The bottleneck link bandwidth is 150 Mbps and the number of flows is 50. The end-to-end delay is varied from 10 ms to 1 s.

Slide 27: Experimental Evaluation: Impact of Varying the Number of Long-Term Flows
- Link bandwidth is set to 500 Mbps; end-to-end delay is set to 60 ms.

Slide 28: Experimental Evaluation
- Bottleneck link bandwidth: 150 Mbps; end-to-end delay: 60 ms; 50 long-term flows; short-term flows varied from 10 to 1000.
- A second configuration with a 150 Mbps bottleneck link; its remaining parameters, and slides 29-31 (result figures), are not recoverable from the transcript.

Slide 32: Modeling of PERT
- Note: PERT makes its decision at the end host, not at the router.
- Incoming rate y(t). (Equations (4)-(6) are not recoverable from the transcript.)

Slide 33: Modeling of PERT
- By equation (A). (Equation (7) is not recoverable from the transcript.)

Slide 34: Simulations: Stability

Slide 35: Emulating PI

Slide 36: Discussion
- Impact of reverse traffic.
- Coexistence with non-proactive flows.

Slide 37: Conclusion
- Congestion prediction at the end host is more accurate than previous studies characterized it to be, but further research is required to improve the accuracy of end-host delay-based predictors.
- PERT emulates the behavior of AQM in its congestion-response function.
- Its benefits are similar to those of ECN.
- Its link utilization is similar to that of router-based schemes.
- PERT is flexible, in the sense that other AQM schemes can be emulated.

Slide 38: A Few of Our Observations
- The authors have put in a good deal of effort, but would the scheme be as simple and appealing if implemented on any kind of network in real time?
- What modifications must now be made at the end host, such as additional hardware and software, and at what cost?
- Is it compatible with other versions of TCP?
- Will this implementation let less-proactive or misbehaving connections take advantage of my readiness to lessen the job a router has to perform?

Slide 39: Questions