
Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes

Zhili Zhao, A. L. Narasimha Reddy
Department of Electrical Engineering, Texas A&M University
[email protected]

June 23, 2004, ICC


Agenda

• Motivation
• Performance Evaluation
• Results & Analysis
• Discussion


Current Network Workload

• Traffic composition in the current network
  – ~60% long-term TCP (LTRFs), ~30% short-term TCP (STFs), ~10% long-term UDP (LTNRFs)
• Non-responsive traffic is increasing
  – STFs + LTNRFs
• Link capacities are increasing
• What is the consequence?


The Trends

• Long-term UDP traffic increases
  – Multimedia applications
  – Impact on TCP applications from the non-responsive UDP traffic

[Figure: TCP and UDP goodput vs. UDP arrival rate]


The Trends (cont’d)

• Link capacity increases
  – Larger buffer memory required if current rules are followed (buffer = bandwidth * delay product)
  – Increased queuing delay
  – Larger memories constrain router speeds
  – What if smaller buffers are used in the future?


Overview of Paper

• Study buffer management policies in the light of
  – Increasing non-responsive loads
  – Increasing link speeds
• Policies studied
  – Droptail
  – RED
  – RED with ECN


Queue Management Schemes

• RED

• RED-ECN (RED w/ ECN enabled)

• Droptail

[Figure: RED marking/drop probability P vs. average queue length (AvgQlen): P is 0 below Minth, rises linearly to Pmax at Maxth, and jumps to 1 beyond Maxth]
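For clarity, a minimal Python sketch of the classic RED marking/drop probability curve shown above; the parameter values in the example are illustrative, not the ones used in the paper, and the EWMA average-queue computation and gentle-RED mode are omitted.

```python
def red_drop_probability(avg_qlen, min_th, max_th, p_max):
    """Classic RED: marking/drop probability as a function of the
    (EWMA-averaged) queue length."""
    if avg_qlen < min_th:
        return 0.0                    # below Minth: never mark/drop
    if avg_qlen >= max_th:
        return 1.0                    # at or above Maxth: drop every packet
    # between Minth and Maxth: probability rises linearly from 0 to Pmax
    return p_max * (avg_qlen - min_th) / (max_th - min_th)

# Illustrative parameters (in packets): Minth=5, Maxth=15, Pmax=0.1
for q in (3, 5, 10, 15, 20):
    print(q, round(red_drop_probability(q, 5, 15, 0.1), 3))
```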


Agenda

• Motivation
• Performance Evaluation
• Results & Analysis
• Discussion


Performance Evaluation

• Different workloads with a higher non-responsive load: 60%
• Different link capacities: 5, 35, and 100 Mbps
• Different buffer sizes: 1/3, 1, or 3 times the BWDP (see the table and sketch below)

* Buffer sizes are in packets (1 packet = 1000 bytes)

                    Link capacity
Multiple of BWDP    5 Mbps    35 Mbps    100 Mbps
1/3                     25        200         500
1                       75        500        1500
3                      225       1500        4500
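As a sanity check, a small Python sketch of how these buffer sizes follow from the bandwidth-delay product, assuming the 120 ms round-trip propagation delay and 1000-byte packets stated elsewhere in the slides; the 35 Mbps entries in the table appear to be rounded.

```python
def bwdp_packets(link_mbps, rtt_s=0.120, pkt_bytes=1000):
    """Bandwidth-delay product expressed in 1000-byte packets."""
    bits_in_flight = link_mbps * 1e6 * rtt_s      # bits on the path over one RTT
    return bits_in_flight / (8 * pkt_bytes)

for mbps in (5, 35, 100):
    bwdp = bwdp_packets(mbps)
    print(mbps, "Mbps:", [round(m * bwdp) for m in (1/3, 1, 3)], "packets")
# 5 Mbps   -> [25, 75, 225]     (matches the table)
# 35 Mbps  -> [175, 525, 1575]  (the table rounds these to 200, 500, 1500)
# 100 Mbps -> [500, 1500, 4500] (matches the table)
```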


Workload Characteristics

• TCP (FTP): LTRFs
• UDP (CBR): LTNRFs
  – Load share: 60%, 55%, or 30%
  – 1 Mbps or 0.5 Mbps per flow
• Short-term TCP: STFs
  – Load share: 0%, 5%, or 30%
  – 10 packets per 10 s on average


Workload Characteristics (cont'd)

• Number of flows under the 35 Mbps link contributing to a 60% non-responsive load

STF Load    # of LTRFs    # of STFs    # of LTNRFs
0%              55              0            22
5%              55            250            22
30%             55           1300            14

* Each LTNRF sends at 1 Mbps
* The numbers of flows under the 5 Mbps and 100 Mbps links are scaled accordingly


Performance Metrics

• Realized TCP throughput

• Average queuing delay

• Link utilization

• Standard deviation of queuing delay
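As a concrete illustration, a minimal Python sketch of how these four metrics could be computed from a per-packet trace. The trace format (enqueue/dequeue timestamps, packet size, TCP flag) is a hypothetical stand-in for the simulator's actual output, not the tooling used in the paper.

```python
from statistics import mean, pstdev

def summarize(trace, link_bps, duration_s):
    """Compute the four metrics from per-packet records.
    Each record is a hypothetical (enqueue_time, dequeue_time, bytes, is_tcp) tuple."""
    qdelays = [deq - enq for enq, deq, _, _ in trace]           # queuing delay per packet
    tcp_bits = sum(b * 8 for _, _, b, tcp in trace if tcp)      # bits delivered by TCP flows
    all_bits = sum(b * 8 for _, _, b, _ in trace)               # bits delivered in total
    return {
        "tcp_throughput_bps": tcp_bits / duration_s,            # realized TCP throughput
        "avg_queuing_delay_s": mean(qdelays),                   # average queuing delay
        "link_utilization": all_bits / (link_bps * duration_s), # fraction of capacity used
        "qdelay_stddev_s": pstdev(qdelays),                     # jitter proxy
    }

# Toy example: two departing packets on a 5 Mbps link over a 1 s window
trace = [(0.00, 0.01, 1000, True), (0.02, 0.05, 1000, False)]
print(summarize(trace, 5e6, 1.0))
```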


Simulation Setup

Simulation Topology

[Figure: dumbbell simulation topology. TCP (FTP) and CBR sources attach to router R1; TCP and CBR sinks attach to router R2. The R1-R2 bottleneck link uses RED or Droptail, Tp = 50 ms]


Link Characteristics

• Capacities between R1 and R2: 5, 35, and 100 Mbps
• Total round-trip propagation delay: 120 ms
• Queue management schemes deployed between R1 and R2: RED, RED-ECN, or Droptail
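To make the experimental design concrete, a hypothetical Python enumeration of the parameter space described on the preceding slides; the names are illustrative, and the slides report results for selected subsets of this cross-product rather than every combination.

```python
from itertools import product

LINK_MBPS = (5, 35, 100)                 # bottleneck capacities
BUFFER_BWDP = ("1/3", "1", "3")          # buffer size as a multiple of BWDP
STF_LOADS = ("0%", "5%", "30%")          # short-term TCP share of the 60% non-responsive load
QUEUE_SCHEMES = ("RED", "RED-ECN", "Droptail")

runs = [
    {"link_mbps": c, "buffer_bwdp": b, "stf_load": s, "queue": q}
    for c, b, s, q in product(LINK_MBPS, BUFFER_BWDP, STF_LOADS, QUEUE_SCHEMES)
]
print(len(runs))   # 81 combinations in the full cross-product
```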


Agenda

• Motivation
• Performance Evaluation
• Simulation Setup
• Results & Analysis
• Discussion


Sets of Simulations

• Changing buffer sizes

• Changing link capacities

• Changing STF loads


Set 1: Changing Buffer Sizes

• Correlation between average queuing delay & BWDP

[Figures: average queuing delay for Droptail and for RED/RED-ECN]


Realized TCP Throughput

• 30% STF load
  – Buffer size varied from 1/3 to 3 BWDP

[Figures: realized TCP throughput under the 5 Mbps and 100 Mbps links]


Realized TCP Throughput (cont’d)

• TCP throughput is higher with Droptail
• The difference shrinks with larger buffer sizes
• Average queuing delay with the RED variants is much smaller than with Droptail
• RED-ECN marginally improves throughput over RED


Link Utilization

• 30% STF load

• Droptail has higher utilization with smaller buffers

• Difference decreases with larger buffers

Multiple of   5 Mbps link              35 Mbps link             100 Mbps link
BWDP          RED    RED-ECN   DT      RED    RED-ECN   DT      RED    RED-ECN   DT
1/3           .943   .947      .974    .961   .955      .968    .967   .959      .971
1             .963   .965      .975    .967   .967      .971    .971   .971      .972
3             .973   .973      .976    .969   .970      .972    .972   .972      .973


Std. Dev. of Queuing Delay

• 30% STF + 30% ON/OFF LTNRF load

[Figures: std. dev. of queuing delay under the 5 Mbps and 100 Mbps links]


Std. Dev. of Queuing Delay (cont'd)

• Droptail has comparable deviation at the 5 Mbps link capacity
• The RED variants show less deviation with larger buffers and higher bandwidths
• The RED variants are better suited to jitter-sensitive applications


Set 2: Changing Link Capacities

• 30% STF load

• Relative Avg Queuing Delay = Avg Queuing Delay/RT Propagation Delay

[Figures: relative average queuing delay with ECN disabled and with ECN enabled]


Relative Avg Queuing Delay

• Droptail's relative average queuing delay is close to the buffer size (x * BWDP)
• The RED variants have a significantly smaller average queuing delay (~1/3 of Droptail's)
• Changing the link capacity has almost no impact


Drop/Marking Rate

• 30% STF load, 1 BWDP

QM        Type of Flow    5 Mbps           35 Mbps        100 Mbps
RED       LTRF (1)        .03627           .03112         .02503
          LTNRF           .03681           .03891         .02814
RED-ECN   LTRF (2)        .00352/.04256    0/.04123       0/.03036
          LTNRF           .04688           .05352         .03406
DT        LTRF (1)        .01787           .01992         .01662
          LTNRF           .10229           .09954         .12189

(1) Format: drop rate
(2) Format: drop rate / marking rate


Set 3: Changing STF Loads

• 1 BWDP

• Normalized TCP throughput = TCP throughput / (UDP+TCP) throughput

[Figures: normalized TCP throughput with ECN disabled and with ECN enabled]


Comparison of Throughputs

• STF throughput is almost constant across the three queue management schemes
• The difference in TCP throughput decreases as the STF load increases

STF Load    RED                       RED-ECN                   DT
            LTRF    STF    LTNRF      LTRF    STF    LTNRF      LTRF    STF    LTNRF
0%          .505    0      .461       .507    0      .458       .730    0      .238
5%          .457    .051   .460       .460    .051   .456       .729    .051   .190
30%         .454    .272   .244       .457    .271   .242       .478    .272   .220


Agenda

• Motivation
• Performance Evaluation
• Simulation Setup
• Results & Analysis
• Discussion


Discussion

• With STF load present and at high BWDPs, the performance metrics of the RED variants are comparable to, or better than, Droptail's
• RED-ECN with TCP SACK gives only a marginal improvement in long-term TCP throughput over RED


Discussion (cont’d)

• Changing either the link capacity or the STF load has only a minor impact on average queuing delay and TCP throughput
• With STFs present:

BWDP                                           Choose?        TCP Throughput    Avg QDelay & Jitter
<< 1 BWDP (small bw/buffer, low-delay link)    Droptail       Better            Comparable
>= 1 BWDP (large bw/buffer, high-delay link)   RED/RED-ECN    Comparable        Significantly lower


Thank you

June 2004


Related Work

• S. Floyd et al., "Internet needs better models"
• C. Diot et al., "Aggregated Traffic Performance with Active Queue Management and Drop from Tail" and "Reasons not to deploy RED"
• K. Jeffay et al., "Tuning RED for Web Traffic"